| datasetId | card |
|---|---|
ovior/twitter_dataset_1713079281 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2375262
num_examples: 7180
download_size: 1355488
dataset_size: 2375262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nguyenthanhasia/xNot360 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 59102
num_examples: 360
download_size: 21525
dataset_size: 59102
---
# Dataset Card for "xNot360"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Atipico1/NQ_test_preprocessed_with_so_case | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: id
dtype: string
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: original_case
list:
- name: answer
dtype: string
- name: context
dtype: string
- name: distance
dtype: string
- name: question
dtype: string
splits:
- name: test
num_bytes: 25289839
num_examples: 3610
download_size: 13881245
dataset_size: 25289839
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
RachaelFu/total_dataset | ---
dataset_info:
features:
- name: image
sequence:
sequence:
sequence: float64
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 79903818
num_examples: 5881
- name: test
num_bytes: 19972013
num_examples: 1471
download_size: 89056012
dataset_size: 99875831
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CyberHarem/matsurika_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of matsurika (Pokémon)
This is the dataset of matsurika (Pokémon), containing 270 images and their tags.
The core tags of this character are `blonde_hair, long_hair, bright_pupils, grey_eyes, blue_eyes, eyelashes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 270 | 227.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsurika_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 270 | 140.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsurika_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 535 | 259.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsurika_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 270 | 204.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsurika_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 535 | 354.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsurika_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/matsurika_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; specific outfits may be discoverable in these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, facepaint, half-closed_eyes, solo, white_shirt, oversized_shirt, upper_body, collarbone, short_sleeves, simple_background, white_background |
| 1 | 9 |  |  |  |  |  | 1girl, facepaint, full_body, half-closed_eyes, oversized_shirt, short_sleeves, white_shirt, bag, closed_mouth, simple_background, solo, torn_pants, white_background, holding, collarbone, looking_at_viewer, paintbrush, sneakers, standing |
| 2 | 15 |  |  |  |  |  | 1girl, facepaint, half-closed_eyes, simple_background, 1boy, hetero, penis, white_background, solo_focus, upper_body, white_shirt, fellatio, mosaic_censoring, collarbone, looking_at_viewer, open_mouth, tongue, low_ponytail, white_pupils |
| 3 | 31 |  |  |  |  |  | 1girl, facepaint, solo, nipples, bodypaint, navel, half-closed_eyes, pussy, collarbone, simple_background, closed_mouth, looking_at_viewer, completely_nude, barefoot, holding_paintbrush, uncensored, full_body, small_breasts, white_background, shiny |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | facepaint | half-closed_eyes | solo | white_shirt | oversized_shirt | upper_body | collarbone | short_sleeves | simple_background | white_background | full_body | bag | closed_mouth | torn_pants | holding | looking_at_viewer | paintbrush | sneakers | standing | 1boy | hetero | penis | solo_focus | fellatio | mosaic_censoring | open_mouth | tongue | low_ponytail | white_pupils | nipples | bodypaint | navel | pussy | completely_nude | barefoot | holding_paintbrush | uncensored | small_breasts | shiny |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:-------------------|:-------|:--------------|:------------------|:-------------|:-------------|:----------------|:--------------------|:-------------------|:------------|:------|:---------------|:-------------|:----------|:--------------------|:-------------|:-----------|:-----------|:-------|:---------|:--------|:-------------|:-----------|:-------------------|:-------------|:---------|:---------------|:---------------|:----------|:------------|:--------|:--------|:------------------|:-----------|:---------------------|:-------------|:----------------|:--------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | X | X | | X | | X | X | | X | X | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 3 | 31 |  |  |  |  |  | X | X | X | X | | | | X | | X | X | X | | X | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
lansinuote/gen.4.cats.book | ---
dataset_info:
features:
- name: image
dtype: image
- name: cls
dtype: int64
splits:
- name: train
num_bytes: 87520550.5
num_examples: 2500
download_size: 0
dataset_size: 87520550.5
---
# Dataset Card for "gen.4.cats.book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eminorhan/gutenberg_en | ---
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
configs:
- config_name: chunk_size_1024
data_files: "gutenberg_en_paragraph_1024.jsonl"
- config_name: chunk_size_2048
data_files: "gutenberg_en_paragraph_2048.jsonl"
---
**Description of the dataset**
This is the November 16, 2023 snapshot of the English subset of the Project Gutenberg corpus (containing 56712 documents in total), downloaded and preprocessed with code from [this repository](https://github.com/eminorhan/gutenberg).
Two different versions of the data are provided:
* The `chunk_size_1024` version divides the data into ~14.2M records, each consisting of a chunk of text a few paragraphs long (at least 1024 characters) plus the corresponding metadata.
* The `chunk_size_2048` version divides the data into ~8.2M records, each consisting of a chunk of text a few paragraphs long (at least 2048 characters) plus the corresponding metadata.
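The chunking scheme can be illustrated with a minimal sketch. The actual preprocessing lives in the linked repository; the function below is a hypothetical illustration of greedy paragraph merging, not the code used to build this dataset:

```python
def chunk_paragraphs(paragraphs, min_chars):
    """Greedily merge consecutive paragraphs until each chunk
    reaches at least `min_chars` characters."""
    chunks, buffer = [], ""
    for paragraph in paragraphs:
        buffer = f"{buffer}\n\n{paragraph}" if buffer else paragraph
        if len(buffer) >= min_chars:
            chunks.append(buffer)
            buffer = ""
    if buffer:  # keep trailing text shorter than min_chars
        chunks.append(buffer)
    return chunks

# three 600-char "paragraphs": the first two merge into one >=1024-char chunk
paragraphs = ["a" * 600, "b" * 600, "c" * 600]
chunks = chunk_paragraphs(paragraphs, min_chars=1024)
```

A larger `min_chars` yields fewer, longer records, which is why the `chunk_size_2048` config has roughly half as many records as `chunk_size_1024`.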
This dataset is ideal for generating fine-grained embeddings of the documents. |
CyberHarem/dace_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of dace/デイス/鲦鱼 (Azur Lane)
This is the dataset of dace/デイス/鲦鱼 (Azur Lane), containing 14 images and their tags.
The core tags of this character are `blue_eyes, breasts, pink_hair, medium_breasts, ponytail, bow, fang, hair_bow, bangs, long_hair, black_bow, hair_ornament, parted_bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 14 | 14.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dace_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 14 | 8.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dace_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 31 | 18.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dace_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 14 | 12.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dace_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 31 | 27.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/dace_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/dace_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; specific outfits may be discoverable in these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, blush, open_mouth, solo, looking_at_viewer, navel, one-piece_swimsuit, bare_shoulders, elbow_gloves, :d, leotard, simple_background, toeless_legwear, black_gloves, highleg, star_(symbol), straddling, torpedo, watercraft, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | open_mouth | solo | looking_at_viewer | navel | one-piece_swimsuit | bare_shoulders | elbow_gloves | :d | leotard | simple_background | toeless_legwear | black_gloves | highleg | star_(symbol) | straddling | torpedo | watercraft | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:-------|:--------------------|:--------|:---------------------|:-----------------|:---------------|:-----|:----------|:--------------------|:------------------|:---------------|:----------|:----------------|:-------------|:----------|:-------------|:-------------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
datahrvoje/twitter_dataset_1712968368 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 23457
num_examples: 54
download_size: 11761
dataset_size: 23457
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/med_alpaca_standardized_cluster_40_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 18735815
num_examples: 33030
download_size: 9360981
dataset_size: 18735815
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_40_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DamarJati/indocorpus-mix | ---
language:
- id
task_categories:
- text2text-generation
tags:
- corpus
- indonesia
- text
- parquet
pretty_name: Corpus Indonesia
size_categories:
- 10K<n<100K
---
# Indonesian Corpus
## Description
This dataset contains a corpus in the Indonesian language taken from [Korpus Indonesia](https://korpusindonesia.kemdikbud.go.id/), provided by the Ministry of Education and Culture of the Republic of Indonesia. The corpus is a collection of text in sentence format covering various fields of study, such as Social, Health, Literature, Opinion, Sports, Culture, and others.
## Contents
The dataset consists of texts in the Indonesian language grouped based on specific fields of study or topics. Each text is a collection of sentences sourced from the aforementioned provider.
## Usage
This dataset can be used for various research and development purposes in the field of natural language processing (NLP), text analysis, text classification, and other research that requires text data in the Indonesian language.
## License
The dataset is retrieved from [Korpus Indonesia](https://korpusindonesia.kemdikbud.go.id/) provided by the Ministry of Education and Culture of the Republic of Indonesia. Please make sure to check and comply with the applicable licensing terms from the original source.
## References
For more information about Korpus Indonesia, please visit [https://korpusindonesia.kemdikbud.go.id/](https://korpusindonesia.kemdikbud.go.id/).
|
mask-distilled-one-sec-cv12/chunk_59 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1315014092
num_examples: 258251
download_size: 1336512684
dataset_size: 1315014092
---
# Dataset Card for "chunk_59"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yunij/daig2_tokenized_dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: prompt_name
dtype: string
- name: source
dtype: string
- name: RDizzl3_seven
dtype: bool
- name: cleaned_text
dtype: string
- name: input_ids
sequence:
sequence: int64
- name: attention_mask
sequence:
sequence: int64
splits:
- name: train
num_bytes: 293764426
num_examples: 44868
download_size: 108904729
dataset_size: 293764426
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kgr123/quality_counter_3500_4_buckets | ---
dataset_info:
features:
- name: context
dtype: string
- name: word
dtype: string
- name: claim
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 19329698
num_examples: 1929
- name: train
num_bytes: 19138230
num_examples: 1935
- name: validation
num_bytes: 19531310
num_examples: 1941
download_size: 12858451
dataset_size: 57999238
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
GGital/Signal_Test03 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
splits:
- name: train
num_bytes: 17878235.0
num_examples: 647
download_size: 17876811
dataset_size: 17878235.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Sofoklis/RF00001_short | ---
dataset_info:
features:
- name: number
dtype: int64
- name: name
dtype: string
- name: sequence
dtype: string
- name: spaced_sequence
dtype: string
- name: array
sequence:
sequence: float64
- name: image
dtype: image
splits:
- name: train
num_bytes: 2955519.789473684
num_examples: 85
- name: test
num_bytes: 347708.2105263158
num_examples: 10
- name: validation
num_bytes: 591103.9578947368
num_examples: 17
download_size: 960996
dataset_size: 3894331.957894737
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
harveymannering/jurassic-coast | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 32984479.0
num_examples: 14
download_size: 32973035
dataset_size: 32984479.0
---
# Dataset Card for "jurassic-coast"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
openerotica/freedom-rp | ---
license: apache-2.0
tags:
- not-for-all-audiences
---
This dataset is the result of combing through several sets of reverse proxy logs and cleaning them of refusals and duplicate, incomplete, and poor-quality responses, with many manual quality checks. It also contains things like e-commerce descriptions for sex toys and bondage gear, as well as examples of SEO-optimized porn video descriptions. I will definitely keep improving this dataset continuously; it should be considered a work in progress. My goal is to create a model (or set of models) which can completely replace OpenAI models for erotic roleplay and adult industry use.
Please consider supporting me on Patreon, I'm only asking for about tree fiddy.
https://www.patreon.com/openerotica
I will continue to refine this dataset and release new versions eventually.
Updates
12/2/2023: Removed some BOS and EOS tags that were missed by filtering. |
vgaraujov/thesis-chile | ---
license: cc-by-4.0
task_categories:
- summarization
- text-generation
- text-classification
language:
- es
pretty_name: Thesis Chile
size_categories:
- 1K<n<10K
---
### Thesis Chile Dataset
### Dataset Summary
Thesis Chile is the dataset partially used to create the [DiscoEval in Spanish benchmark](https://github.com/OpenCENIA/Spanish-Sentence-Evaluation).
This dataset was created by scraping titles and abstracts of Chilean theses from the public repositories of the Pontificia Universidad Catolica de Chile (repositorio.uc.cl), the Universidad de Chile (repositorio.uchile.cl), and the Universidad Técnica Federico Santa María (biblioteca.usm.cl).
### Supported Tasks
We see the potential utility of this data for both discriminative and generative tasks. For classification purposes, the title-abstract pairs offer the opportunity to assess semantic similarity or entailment. Conversely, in generative tasks, the abstracts can serve as inputs for models to generate titles (summary).
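For the discriminative setup, title–abstract pairs can be turned into a binary matching task by sampling mismatched titles as negatives. The sketch below is illustrative only: the record contents and the `title`/`abstract` field names are assumptions, not necessarily those of the released files.

```python
import random

# Hypothetical records; the actual field names in the dataset may differ.
records = [
    {"title": "Deep learning for seismic event detection",
     "abstract": "We study neural models for detecting seismic events..."},
    {"title": "Optimization methods for copper mining",
     "abstract": "This thesis proposes optimization methods for mine planning..."},
]

def make_pairs(records, seed=0):
    """Build (title, abstract, label) tuples: label 1 for the true
    title, label 0 for a title sampled from a different thesis."""
    rng = random.Random(seed)
    pairs = []
    for i, rec in enumerate(records):
        pairs.append((rec["title"], rec["abstract"], 1))
        j = rng.choice([k for k in range(len(records)) if k != i])
        pairs.append((records[j]["title"], rec["abstract"], 0))
    return pairs

pairs = make_pairs(records)
```

The resulting balanced pairs can feed a standard sentence-pair classifier; for the generative direction, the abstracts alone serve as inputs and the titles as targets.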
### Citation Information
```
@inproceedings{araujo-etal-2022-evaluation,
title = "Evaluation Benchmarks for {S}panish Sentence Representations",
author = "Araujo, Vladimir and
Carvallo, Andr{\'e}s and
Kundu, Souvik and
Ca{\~n}ete, Jos{\'e} and
Mendoza, Marcelo and
Mercer, Robert E. and
Bravo-Marquez, Felipe and
Moens, Marie-Francine and
Soto, Alvaro",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.648",
pages = "6024--6034",
}
```
|
fmeleard/moody_data | ---
license: apache-2.0
task_categories:
- summarization
- conversational
language:
- fr
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
tellarin-ai/ntx_llm_inst_german | ---
license: cc-by-sa-4.0
language:
- de
task_categories:
- token-classification
---
# Dataset Card for NTX v1 in the Aya format - German subset
This dataset converts the German data from the original NTX into the Aya instruction format; it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal-expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@misc{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ztphs980/taptap_datasets | ---
license: mit
language:
- en
---
This repository contains a total of 483 tabular datasets with meaningful column names, collected from the OpenML, UCI, and Kaggle platforms. The last column of each dataset is the label column. For more details, please refer to our paper: https://arxiv.org/abs/2305.09696.
You can use the [code](https://github.com/ZhangTP1996/TapTap/blob/master/load_pretraining_datasets.py) to load all the datasets into a dictionary of pd.DataFrame.
An example script can be found below:
```python
from datasets import load_dataset
import pandas as pd
import numpy as np
data = {}
dataset = load_dataset(path='ztphs980/taptap_datasets')
dataset = dataset['train'].to_dict()
for table_name, table in zip(dataset['dataset_name'], dataset['table']):
    table = pd.DataFrame.from_dict(eval(table, {'nan': np.nan}))
    data[table_name] = table
``` |
mstz/lrs | ---
language:
- en
tags:
- lrs
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Lrs
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- lrs
- lrs_0
- lrs_1
- lrs_2
- lrs_3
- lrs_4
- lrs_5
- lrs_6
- lrs_7
- lrs_8
license: cc
---
# Lrs
The [Lrs dataset](https://archive-beta.ics.uci.edu/dataset/93/low+resolution+spectrometer) from the [UCI repository](https://archive-beta.ics.uci.edu).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------|
| lrs | Multiclass classification | Classify lrs type. |
| lrs_0 | Binary classification | Is this instance of class 0? |
| lrs_1 | Binary classification | Is this instance of class 1? |
| lrs_2 | Binary classification | Is this instance of class 2? |
| lrs_3 | Binary classification | Is this instance of class 3? |
| lrs_4 | Binary classification | Is this instance of class 4? |
| lrs_5 | Binary classification | Is this instance of class 5? |
| lrs_6 | Binary classification | Is this instance of class 6? |
| lrs_7 | Binary classification | Is this instance of class 7? |
| lrs_8 | Binary classification | Is this instance of class 8? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/lrs", "lrs")["train"]
``` |
hoangdeeptry/voice_dataset | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: content
sequence: string
splits:
- name: train
num_bytes: 4028550785.313
num_examples: 1523
download_size: 3752995558
dataset_size: 4028550785.313
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "voice_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_BEE-spoke-data__TinyLlama-3T-1.1bee | ---
pretty_name: Evaluation run of BEE-spoke-data/TinyLlama-3T-1.1bee
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [BEE-spoke-data/TinyLlama-3T-1.1bee](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BEE-spoke-data__TinyLlama-3T-1.1bee\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-07T23:10:41.874868](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__TinyLlama-3T-1.1bee/blob/main/results_2024-01-07T23-10-41.874868.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2639999673426553,\n\
\ \"acc_stderr\": 0.031097948131753084,\n \"acc_norm\": 0.26579484215214555,\n\
\ \"acc_norm_stderr\": 0.03189164317426905,\n \"mc1\": 0.2215422276621787,\n\
\ \"mc1_stderr\": 0.014537867601301137,\n \"mc2\": 0.38125036947687346,\n\
\ \"mc2_stderr\": 0.014406514283056868\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.30887372013651876,\n \"acc_stderr\": 0.013501770929344004,\n\
\ \"acc_norm\": 0.3378839590443686,\n \"acc_norm_stderr\": 0.013822047922283524\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.44722166899024096,\n\
\ \"acc_stderr\": 0.004961904949171383,\n \"acc_norm\": 0.602867954590719,\n\
\ \"acc_norm_stderr\": 0.004883037758919969\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768081,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768081\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.28888888888888886,\n\
\ \"acc_stderr\": 0.03915450630414251,\n \"acc_norm\": 0.28888888888888886,\n\
\ \"acc_norm_stderr\": 0.03915450630414251\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17105263157894737,\n \"acc_stderr\": 0.030643607071677084,\n\
\ \"acc_norm\": 0.17105263157894737,\n \"acc_norm_stderr\": 0.030643607071677084\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.24,\n\
\ \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2641509433962264,\n \"acc_stderr\": 0.02713429162874171,\n\
\ \"acc_norm\": 0.2641509433962264,\n \"acc_norm_stderr\": 0.02713429162874171\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2638888888888889,\n\
\ \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.2638888888888889,\n\
\ \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.33,\n\
\ \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \
\ \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.18497109826589594,\n\
\ \"acc_stderr\": 0.02960562398177124,\n \"acc_norm\": 0.18497109826589594,\n\
\ \"acc_norm_stderr\": 0.02960562398177124\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.04023382273617749,\n\
\ \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.04023382273617749\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.30638297872340425,\n \"acc_stderr\": 0.03013590647851756,\n\
\ \"acc_norm\": 0.30638297872340425,\n \"acc_norm_stderr\": 0.03013590647851756\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2543859649122807,\n\
\ \"acc_stderr\": 0.040969851398436716,\n \"acc_norm\": 0.2543859649122807,\n\
\ \"acc_norm_stderr\": 0.040969851398436716\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.22758620689655173,\n \"acc_stderr\": 0.03493950380131183,\n\
\ \"acc_norm\": 0.22758620689655173,\n \"acc_norm_stderr\": 0.03493950380131183\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.23809523809523808,\n \"acc_stderr\": 0.021935878081184763,\n \"\
acc_norm\": 0.23809523809523808,\n \"acc_norm_stderr\": 0.021935878081184763\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.18253968253968253,\n\
\ \"acc_stderr\": 0.03455071019102146,\n \"acc_norm\": 0.18253968253968253,\n\
\ \"acc_norm_stderr\": 0.03455071019102146\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.23870967741935484,\n \"acc_stderr\": 0.024251071262208834,\n \"\
acc_norm\": 0.23870967741935484,\n \"acc_norm_stderr\": 0.024251071262208834\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.2660098522167488,\n \"acc_stderr\": 0.031089826002937523,\n \"\
acc_norm\": 0.2660098522167488,\n \"acc_norm_stderr\": 0.031089826002937523\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \"acc_norm\"\
: 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.22727272727272727,\n \"acc_stderr\": 0.029857515673386407,\n \"\
acc_norm\": 0.22727272727272727,\n \"acc_norm_stderr\": 0.029857515673386407\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.21761658031088082,\n \"acc_stderr\": 0.029778663037752954,\n\
\ \"acc_norm\": 0.21761658031088082,\n \"acc_norm_stderr\": 0.029778663037752954\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.24871794871794872,\n \"acc_stderr\": 0.0219169577092138,\n \
\ \"acc_norm\": 0.24871794871794872,\n \"acc_norm_stderr\": 0.0219169577092138\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.23703703703703705,\n \"acc_stderr\": 0.025928876132766104,\n \
\ \"acc_norm\": 0.23703703703703705,\n \"acc_norm_stderr\": 0.025928876132766104\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.226890756302521,\n \"acc_stderr\": 0.027205371538279472,\n \
\ \"acc_norm\": 0.226890756302521,\n \"acc_norm_stderr\": 0.027205371538279472\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2251655629139073,\n \"acc_stderr\": 0.03410435282008936,\n \"\
acc_norm\": 0.2251655629139073,\n \"acc_norm_stderr\": 0.03410435282008936\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.23669724770642203,\n \"acc_stderr\": 0.01822407811729907,\n \"\
acc_norm\": 0.23669724770642203,\n \"acc_norm_stderr\": 0.01822407811729907\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.33796296296296297,\n \"acc_stderr\": 0.03225941352631295,\n \"\
acc_norm\": 0.33796296296296297,\n \"acc_norm_stderr\": 0.03225941352631295\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.28431372549019607,\n \"acc_stderr\": 0.031660096793998116,\n \"\
acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.031660096793998116\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.2489451476793249,\n \"acc_stderr\": 0.028146970599422644,\n \
\ \"acc_norm\": 0.2489451476793249,\n \"acc_norm_stderr\": 0.028146970599422644\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.36771300448430494,\n\
\ \"acc_stderr\": 0.03236198350928275,\n \"acc_norm\": 0.36771300448430494,\n\
\ \"acc_norm_stderr\": 0.03236198350928275\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.22900763358778625,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.22900763358778625,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2727272727272727,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.043300437496507437,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.043300437496507437\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.3374233128834356,\n \"acc_stderr\": 0.03714908409935574,\n\
\ \"acc_norm\": 0.3374233128834356,\n \"acc_norm_stderr\": 0.03714908409935574\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.29464285714285715,\n\
\ \"acc_stderr\": 0.04327040932578728,\n \"acc_norm\": 0.29464285714285715,\n\
\ \"acc_norm_stderr\": 0.04327040932578728\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.2524271844660194,\n \"acc_stderr\": 0.04301250399690877,\n\
\ \"acc_norm\": 0.2524271844660194,\n \"acc_norm_stderr\": 0.04301250399690877\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.24358974358974358,\n\
\ \"acc_stderr\": 0.02812096650391441,\n \"acc_norm\": 0.24358974358974358,\n\
\ \"acc_norm_stderr\": 0.02812096650391441\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2796934865900383,\n\
\ \"acc_stderr\": 0.016050792148036553,\n \"acc_norm\": 0.2796934865900383,\n\
\ \"acc_norm_stderr\": 0.016050792148036553\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.21676300578034682,\n \"acc_stderr\": 0.02218347766841285,\n\
\ \"acc_norm\": 0.21676300578034682,\n \"acc_norm_stderr\": 0.02218347766841285\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.25027932960893856,\n\
\ \"acc_stderr\": 0.014487500852850423,\n \"acc_norm\": 0.25027932960893856,\n\
\ \"acc_norm_stderr\": 0.014487500852850423\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.025829163272757482,\n\
\ \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.025829163272757482\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.3183279742765273,\n\
\ \"acc_stderr\": 0.026457225067811025,\n \"acc_norm\": 0.3183279742765273,\n\
\ \"acc_norm_stderr\": 0.026457225067811025\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2716049382716049,\n \"acc_stderr\": 0.02474862449053737,\n\
\ \"acc_norm\": 0.2716049382716049,\n \"acc_norm_stderr\": 0.02474862449053737\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2375886524822695,\n \"acc_stderr\": 0.0253895125527299,\n \
\ \"acc_norm\": 0.2375886524822695,\n \"acc_norm_stderr\": 0.0253895125527299\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24771838331160365,\n\
\ \"acc_stderr\": 0.011025499291443738,\n \"acc_norm\": 0.24771838331160365,\n\
\ \"acc_norm_stderr\": 0.011025499291443738\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.20220588235294118,\n \"acc_stderr\": 0.024398192986654917,\n\
\ \"acc_norm\": 0.20220588235294118,\n \"acc_norm_stderr\": 0.024398192986654917\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.26143790849673204,\n \"acc_stderr\": 0.017776947157528044,\n \
\ \"acc_norm\": 0.26143790849673204,\n \"acc_norm_stderr\": 0.017776947157528044\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2818181818181818,\n\
\ \"acc_stderr\": 0.04309118709946458,\n \"acc_norm\": 0.2818181818181818,\n\
\ \"acc_norm_stderr\": 0.04309118709946458\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.17959183673469387,\n \"acc_stderr\": 0.024573293589585637,\n\
\ \"acc_norm\": 0.17959183673469387,\n \"acc_norm_stderr\": 0.024573293589585637\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n\
\ \"acc_stderr\": 0.030360490154014652,\n \"acc_norm\": 0.24378109452736318,\n\
\ \"acc_norm_stderr\": 0.030360490154014652\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036845,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.29518072289156627,\n\
\ \"acc_stderr\": 0.03550920185689629,\n \"acc_norm\": 0.29518072289156627,\n\
\ \"acc_norm_stderr\": 0.03550920185689629\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.30994152046783624,\n \"acc_stderr\": 0.035469769593931624,\n\
\ \"acc_norm\": 0.30994152046783624,\n \"acc_norm_stderr\": 0.035469769593931624\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2215422276621787,\n\
\ \"mc1_stderr\": 0.014537867601301137,\n \"mc2\": 0.38125036947687346,\n\
\ \"mc2_stderr\": 0.014406514283056868\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6022099447513812,\n \"acc_stderr\": 0.013755743513749027\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.004548900682335102,\n \
\ \"acc_stderr\": 0.0018535550440036204\n }\n}\n```"
repo_url: https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|arc:challenge|25_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|gsm8k|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hellaswag|10_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-07T23-10-41.874868.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-07T23-10-41.874868.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- '**/details_harness|winogrande|5_2024-01-07T23-10-41.874868.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-07T23-10-41.874868.parquet'
- config_name: results
data_files:
- split: 2024_01_07T23_10_41.874868
path:
- results_2024-01-07T23-10-41.874868.parquet
- split: latest
path:
- results_2024-01-07T23-10-41.874868.parquet
---
# Dataset Card for Evaluation run of BEE-spoke-data/TinyLlama-3T-1.1bee
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [BEE-spoke-data/TinyLlama-3T-1.1bee](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BEE-spoke-data__TinyLlama-3T-1.1bee",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-07T23:10:41.874868](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__TinyLlama-3T-1.1bee/blob/main/results_2024-01-07T23-10-41.874868.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the "results" and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.2639999673426553,
"acc_stderr": 0.031097948131753084,
"acc_norm": 0.26579484215214555,
"acc_norm_stderr": 0.03189164317426905,
"mc1": 0.2215422276621787,
"mc1_stderr": 0.014537867601301137,
"mc2": 0.38125036947687346,
"mc2_stderr": 0.014406514283056868
},
"harness|arc:challenge|25": {
"acc": 0.30887372013651876,
"acc_stderr": 0.013501770929344004,
"acc_norm": 0.3378839590443686,
"acc_norm_stderr": 0.013822047922283524
},
"harness|hellaswag|10": {
"acc": 0.44722166899024096,
"acc_stderr": 0.004961904949171383,
"acc_norm": 0.602867954590719,
"acc_norm_stderr": 0.004883037758919969
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768081,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768081
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.03915450630414251,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.03915450630414251
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17105263157894737,
"acc_stderr": 0.030643607071677084,
"acc_norm": 0.17105263157894737,
"acc_norm_stderr": 0.030643607071677084
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2641509433962264,
"acc_stderr": 0.02713429162874171,
"acc_norm": 0.2641509433962264,
"acc_norm_stderr": 0.02713429162874171
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2638888888888889,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.2638888888888889,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.18497109826589594,
"acc_stderr": 0.02960562398177124,
"acc_norm": 0.18497109826589594,
"acc_norm_stderr": 0.02960562398177124
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.20588235294117646,
"acc_stderr": 0.04023382273617749,
"acc_norm": 0.20588235294117646,
"acc_norm_stderr": 0.04023382273617749
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.30638297872340425,
"acc_stderr": 0.03013590647851756,
"acc_norm": 0.30638297872340425,
"acc_norm_stderr": 0.03013590647851756
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2543859649122807,
"acc_stderr": 0.040969851398436716,
"acc_norm": 0.2543859649122807,
"acc_norm_stderr": 0.040969851398436716
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.22758620689655173,
"acc_stderr": 0.03493950380131183,
"acc_norm": 0.22758620689655173,
"acc_norm_stderr": 0.03493950380131183
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.23809523809523808,
"acc_stderr": 0.021935878081184763,
"acc_norm": 0.23809523809523808,
"acc_norm_stderr": 0.021935878081184763
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.18253968253968253,
"acc_stderr": 0.03455071019102146,
"acc_norm": 0.18253968253968253,
"acc_norm_stderr": 0.03455071019102146
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.23870967741935484,
"acc_stderr": 0.024251071262208834,
"acc_norm": 0.23870967741935484,
"acc_norm_stderr": 0.024251071262208834
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2660098522167488,
"acc_stderr": 0.031089826002937523,
"acc_norm": 0.2660098522167488,
"acc_norm_stderr": 0.031089826002937523
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24242424242424243,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.24242424242424243,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.029857515673386407,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.029857515673386407
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.21761658031088082,
"acc_stderr": 0.029778663037752954,
"acc_norm": 0.21761658031088082,
"acc_norm_stderr": 0.029778663037752954
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.24871794871794872,
"acc_stderr": 0.0219169577092138,
"acc_norm": 0.24871794871794872,
"acc_norm_stderr": 0.0219169577092138
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.025928876132766104,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.025928876132766104
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.226890756302521,
"acc_stderr": 0.027205371538279472,
"acc_norm": 0.226890756302521,
"acc_norm_stderr": 0.027205371538279472
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2251655629139073,
"acc_stderr": 0.03410435282008936,
"acc_norm": 0.2251655629139073,
"acc_norm_stderr": 0.03410435282008936
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.23669724770642203,
"acc_stderr": 0.01822407811729907,
"acc_norm": 0.23669724770642203,
"acc_norm_stderr": 0.01822407811729907
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.33796296296296297,
"acc_stderr": 0.03225941352631295,
"acc_norm": 0.33796296296296297,
"acc_norm_stderr": 0.03225941352631295
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.031660096793998116,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.031660096793998116
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2489451476793249,
"acc_stderr": 0.028146970599422644,
"acc_norm": 0.2489451476793249,
"acc_norm_stderr": 0.028146970599422644
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.36771300448430494,
"acc_stderr": 0.03236198350928275,
"acc_norm": 0.36771300448430494,
"acc_norm_stderr": 0.03236198350928275
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.22900763358778625,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.22900763358778625,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2727272727272727,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.2727272727272727,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.043300437496507437,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.043300437496507437
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3374233128834356,
"acc_stderr": 0.03714908409935574,
"acc_norm": 0.3374233128834356,
"acc_norm_stderr": 0.03714908409935574
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.29464285714285715,
"acc_stderr": 0.04327040932578728,
"acc_norm": 0.29464285714285715,
"acc_norm_stderr": 0.04327040932578728
},
"harness|hendrycksTest-management|5": {
"acc": 0.2524271844660194,
"acc_stderr": 0.04301250399690877,
"acc_norm": 0.2524271844660194,
"acc_norm_stderr": 0.04301250399690877
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.24358974358974358,
"acc_stderr": 0.02812096650391441,
"acc_norm": 0.24358974358974358,
"acc_norm_stderr": 0.02812096650391441
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2796934865900383,
"acc_stderr": 0.016050792148036553,
"acc_norm": 0.2796934865900383,
"acc_norm_stderr": 0.016050792148036553
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.21676300578034682,
"acc_stderr": 0.02218347766841285,
"acc_norm": 0.21676300578034682,
"acc_norm_stderr": 0.02218347766841285
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.25027932960893856,
"acc_stderr": 0.014487500852850423,
"acc_norm": 0.25027932960893856,
"acc_norm_stderr": 0.014487500852850423
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.025829163272757482,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.025829163272757482
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.3183279742765273,
"acc_stderr": 0.026457225067811025,
"acc_norm": 0.3183279742765273,
"acc_norm_stderr": 0.026457225067811025
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2716049382716049,
"acc_stderr": 0.02474862449053737,
"acc_norm": 0.2716049382716049,
"acc_norm_stderr": 0.02474862449053737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2375886524822695,
"acc_stderr": 0.0253895125527299,
"acc_norm": 0.2375886524822695,
"acc_norm_stderr": 0.0253895125527299
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24771838331160365,
"acc_stderr": 0.011025499291443738,
"acc_norm": 0.24771838331160365,
"acc_norm_stderr": 0.011025499291443738
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.20220588235294118,
"acc_stderr": 0.024398192986654917,
"acc_norm": 0.20220588235294118,
"acc_norm_stderr": 0.024398192986654917
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.26143790849673204,
"acc_stderr": 0.017776947157528044,
"acc_norm": 0.26143790849673204,
"acc_norm_stderr": 0.017776947157528044
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.2818181818181818,
"acc_stderr": 0.04309118709946458,
"acc_norm": 0.2818181818181818,
"acc_norm_stderr": 0.04309118709946458
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.17959183673469387,
"acc_stderr": 0.024573293589585637,
"acc_norm": 0.17959183673469387,
"acc_norm_stderr": 0.024573293589585637
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.030360490154014652,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.030360490154014652
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-virology|5": {
"acc": 0.29518072289156627,
"acc_stderr": 0.03550920185689629,
"acc_norm": 0.29518072289156627,
"acc_norm_stderr": 0.03550920185689629
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.30994152046783624,
"acc_stderr": 0.035469769593931624,
"acc_norm": 0.30994152046783624,
"acc_norm_stderr": 0.035469769593931624
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2215422276621787,
"mc1_stderr": 0.014537867601301137,
"mc2": 0.38125036947687346,
"mc2_stderr": 0.014406514283056868
},
"harness|winogrande|5": {
"acc": 0.6022099447513812,
"acc_stderr": 0.013755743513749027
},
"harness|gsm8k|5": {
"acc": 0.004548900682335102,
"acc_stderr": 0.0018535550440036204
}
}
```
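Individual task scores can be aggregated directly from this JSON. The sketch below (using a small hypothetical subset of the dict above, not the full results) averages accuracy over the MMLU (`hendrycksTest`) tasks:

```python
# Hypothetical subset of the results dict shown above; keys follow the
# "harness|<task>|<n_shots>" naming used by the evaluation harness.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.26},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.29},
    "harness|arc:challenge|25": {"acc": 0.31},  # not an MMLU task, excluded below
}

# Collect accuracy for MMLU tasks only, then average.
mmlu = [v["acc"] for k, v in results.items() if "hendrycksTest" in k]
print(sum(mmlu) / len(mmlu))  # -> 0.275
```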
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- pt
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: told-br
pretty_name: ToLD-Br
language_bcp47:
- pt-BR
tags:
- hate-speech-detection
dataset_info:
- config_name: multilabel
features:
- name: text
dtype: string
- name: homophobia
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: obscene
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: insult
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: racism
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: misogyny
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
- name: xenophobia
dtype:
class_label:
names:
'0': zero_votes
'1': one_vote
'2': two_votes
'3': three_votes
splits:
- name: train
num_bytes: 2978006
num_examples: 21000
download_size: 2430416
dataset_size: 2978006
- config_name: binary
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': not-toxic
'1': toxic
splits:
- name: train
num_bytes: 1709560
num_examples: 16800
- name: test
num_bytes: 216297
num_examples: 2100
- name: validation
num_bytes: 212153
num_examples: 2100
download_size: 853322
dataset_size: 2138010
---
# Dataset Card for "ToLD-Br"
## Table of Contents
- [Dataset Card for "ToLD-Br"](#dataset-card-for-told-br)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://paperswithcode.com/dataset/told-br
- **Repository:** https://github.com/JAugusto97/ToLD-Br
- **Paper:** https://arxiv.org/abs/2010.04543
- **Leaderboard:** https://paperswithcode.com/sota/hate-speech-detection-on-told-br
- **Point of Contact:** joao.leite@estudante.ufscar.br
### Dataset Summary
ToLD-Br is the biggest dataset for toxic tweets in Brazilian Portuguese, crowdsourced by 42 annotators selected from a pool of 129 volunteers. Annotators were selected aiming to create a plural group in terms of demographics (ethnicity, sexual orientation, age, gender). Each tweet was labeled by three annotators in 6 possible categories: LGBTQ+phobia, Xenophobia, Obscene, Insult, Misogyny and Racism.
### Supported Tasks and Leaderboards
- `text-classification-other-hate-speech-detection`: The dataset can be used to train a model for Hate Speech Detection, either using its multi-label classes or by grouping them into a binary Hate vs. Non-Hate class. A [BERT](https://huggingface.co/docs/transformers/model_doc/bert) model can be fine-tuned to perform this task and achieve a 0.75 F1-score on its binary version.
### Languages
The text in the dataset is in Brazilian Portuguese, as posted by Twitter users. The associated BCP-47 code is `pt-BR`.
## Dataset Structure
### Data Instances
ToLD-Br has two versions: binary and multilabel.
Multilabel:
A data point consists of the tweet text (string) followed by six category fields (homophobia, obscene, insult, racism, misogyny and xenophobia), each with a value from 0 to 3 representing the number of annotator votes for that class.
An example from multilabel ToLD-Br looks as follows:
```
{'text': '@user bandido dissimulado. esse sérgio moro é uma espécie de mal carater com ditadura e pitadas de atraso',
 'homophobia': 0,
 'obscene': 0,
 'insult': 2,
 'racism': 0,
 'misogyny': 0,
 'xenophobia': 0}
```
Binary:
A data point consists of the tweet text (string) followed by a binary class "toxic" with values 0 or 1.
An example from binary ToLD-Br looks as follows:
```
{'text': '@user bandido dissimulado. esse sérgio moro é uma espécie de mal carater com ditadura e pitadas de atraso',
 'toxic': 1}
```
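The two versions are related by the annotators' votes. For illustration only, here is one plausible way to derive a binary label from the multilabel votes — a majority-vote sketch; the exact binarization rule used by the dataset authors may differ:

```python
def to_binary(example, threshold=2):
    """Map a multilabel ToLD-Br example to a binary toxic label.

    Flags an example as toxic when at least `threshold` of the three
    annotators voted for any single category. This majority-vote rule
    is an assumption for illustration; the authors' exact binarization
    may differ.
    """
    categories = ("homophobia", "obscene", "insult",
                  "racism", "misogyny", "xenophobia")
    return int(any(example[c] >= threshold for c in categories))

example = {"text": "...", "homophobia": 0, "obscene": 0, "insult": 2,
           "racism": 0, "misogyny": 0, "xenophobia": 0}
print(to_binary(example))  # -> 1 (two of three annotators voted "insult")
```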
### Data Fields
Multilabel:
- text: A string representing the tweet posted by a user. Mentions to other users are anonymized by replacing the mention with a @user tag.
- homophobia: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as homophobic.
- obscene: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as obscene.
- insult: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as an insult.
- racism: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as racist.
- misogyny: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as misogynistic.
- xenophobia: numerical value {0, 1, 2, 3} representing the number of votes given by annotators flagging the respective tweet as xenophobic.
Binary:
- text: A string representing the tweet posted by a user. Mentions to other users are anonymized by replacing the mention with a @user tag.
- label: numerical binary value {0, 1} representing if the respective text is toxic/abusive or not.
### Data Splits
Multilabel:
The entire dataset consists of 21,000 examples.
Binary:
The train set consists of 16,800 examples, the validation set of 2,100 examples and the test set of 2,100 examples.
## Dataset Creation
### Curation Rationale
Despite Portuguese being the 5th most spoken language in the world and Brazil the 4th country in number of unique Twitter users, Brazilian Portuguese was underrepresented in the hate-speech detection task. Only two other datasets were available, one of them in European Portuguese. ToLD-Br is 4x bigger than both of these datasets combined, and neither of them had multiple annotators per instance. In addition, this work proposes a plural and diverse group of annotators, carefully selected to avoid inserting bias into the annotation.
### Source Data
#### Initial Data Collection and Normalization
Data was collected in 15 days in August 2019 using Gate Cloud's Tweet Collector. Ten million tweets were collected using two methods: a keyword-based method and a user-mention method. The first method collected tweets mentioning the following keywords:
viado,veado,viadinho,veadinho,viadao,veadao,bicha,bixa,bichinha,bixinha,bichona,bixona,baitola,sapatão,sapatao,traveco,bambi,biba,boiola,marica,gayzão,gayzao,flor,florzinha,vagabundo,vagaba,desgraçada,desgraçado,desgracado,arrombado,arrombada,foder,fuder,fudido,fodido,cú,cu,pinto,pau,pal,caralho,caraio,carai,pica,cacete,rola,porra,escroto,buceta,fdp,pqp,vsf,tnc,vtnc,puto,putinho,acéfalo,acefalo,burro,idiota,trouxa,estúpido,estupido,estúpida,canalha,demente,retardado,retardada,verme,maldito,maldita,ridículo,ridiculo,ridícula,ridicula,morfético,morfetico,morfética,morfetica,lazarento,lazarenta,lixo,mongolóide,mongoloide,mongol,asqueroso,asquerosa,cretino,cretina,babaca,pilantra,neguinho,neguinha,pretinho,pretinha,escurinho,escurinha,pretinha,pretinho,crioulo,criolo,crioula,criola,macaco,macaca,gorila,puta,vagabunda,vagaba,mulherzinha,piranha,feminazi,putinha,piriguete,vaca,putinha,bahiano,baiano,baianagem,xingling,xing ling,xing-ling,carioca,paulista,sulista,mineiro,gringo
The second method collected tweets mentioning the most followed Brazilian Twitter accounts; the list can be found [here](https://assuperlistas.com/2022/01/21/os-100-brasileiros-mais-seguidos-do-twitter/).
#### Who are the source language producers?
The language producers are Twitter users from Brazil, speakers of Portuguese.
### Annotations
#### Annotation process
A form was published at the Federal University of São Carlos asking for volunteers to annotate our dataset. 129 people volunteered and 42 were selected according to their demographics in order to create a diverse and plural annotation group. Guidelines were produced and presented to the annotators. The entire process was done asynchronously because of the Covid-19 pandemic. The tool used was Google Sheets. Annotators were grouped into 14 teams of three annotators each. Each group annotated a respective file containing 1500 tweets. Annotators didn't have contact with each other, nor did they know that other annotators were labelling the same tweets as they were.
#### Who are the annotators?
Annotators were people from the Federal University of São Carlos' Facebook group. Their demographics are described below:
| Gender | |
|--------|--------|
| Male | 18 |
| Female | 24 |
| Sexual Orientation | |
|--------------------|----|
| Heterosexual | 22 |
| Bisexual | 12 |
| Homosexual | 5 |
| Pansexual | 3 |
| Ethnicity | |
|--------------|----|
| White | 25 |
| Brown | 9 |
| Black | 5 |
| Asian | 2 |
| Non-Declared | 1 |
Ages range from 18 to 37 years old.
Annotators were paid R$50 ($10) to label 1500 examples each.
### Personal and Sensitive Information
The dataset contains sensitive information for homophobia, obscene, insult, racism, misogyny and xenophobia.
Tweets were anonymized by replacing user mentions with a @user tag.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better hate speech detection systems.
A system that succeeds at this task would be able to identify hate speech tweets associated with the classes available in the dataset.
### Discussion of Biases
An effort was made to reduce annotation bias by selecting annotators with a diverse demographic background. In terms of data collection, by using keywords and user mentions, we are introducing some bias to the data, restricting our scope to the list of keywords and users we created.
### Other Known Limitations
Because of the massive data skew for the multilabel classes, it is extremely hard to train a robust model for this version of the dataset. We advise using it for analysis and experimentation only. The binary version of the dataset is robust enough to train a classifier with up to 76% F1-score.
## Additional Information
### Dataset Curators
The dataset was created by João Augusto Leite and Diego Furtado Silva, both from the Federal University of São Carlos (BR), and Carolina Scarton and Kalina Bontcheva, both from the University of Sheffield (UK).
### Licensing Information
ToLD-Br is licensed under a Creative Commons BY-SA 4.0 license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2010-04543,
author = {Joao Augusto Leite and
Diego F. Silva and
Kalina Bontcheva and
Carolina Scarton},
title = {Toxic Language Detection in Social Media for Brazilian Portuguese:
New Dataset and Multilingual Analysis},
journal = {CoRR},
volume = {abs/2010.04543},
year = {2020},
url = {https://arxiv.org/abs/2010.04543},
eprinttype = {arXiv},
eprint = {2010.04543},
timestamp = {Tue, 15 Dec 2020 16:10:16 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2010-04543.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@JAugusto97](https://github.com/JAugusto97) for adding this dataset. |
MiMe-MeMo/Corpus-v1.1 | ---
license: cc-by-4.0
language:
- da
---
# MeMo corpus v1.1
Jens Bjerring-Hansen, Philip Diderichsen, Dorte Haltrup Hansen, June 2023
This is data release version 1.1 of the MeMo corpus comprising almost all Danish novels from the period 1870-1899, known as the Modern Breakthrough.
The current version of the corpus is publicly viewable and searchable at <https://alf.hum.ku.dk/korp/?mode=memo_all>.
The corpus has been enhanced since version 1.0 with the following 19 titles that have been reprocessed or added to the corpus.
1. Vilhelm Bergsøe: Bruden fra Rørvig (1872)
2. Johanne Schjørring: Rige Dage (1877)
3. Anonymous: Tante Jacobine (1878)
4. Jonas Lie: Rutland (1880)
5. Vilhelm Malling: Fra Kjøbstadlivet i gamle Dage (1882)
6. Adda Ravnkilde: To Fortællinger (1884)
7. Henrik Pontoppidan: Ung Elskov (1885)
8. Therese Brummer: Som man gifter sig (1888)
9. Henrik Pontoppidan: Natur (1890)
10. R.H.: En Kjøbenhavners Livshistorie eller Lykkens Omskiftelser (1891)
11. Henrik Pontoppidan: Minder (1893)
12. Johannes Jørgensen: Hjemvee (1894)
13. Henrik Pontoppidan: Nattevagt (1894)
14. Jonas Lie: Naar Sol gaar ned (1895)
15. Gustav Wied: Ungdomshistorier (1895)
16. Herman Bang: Ludvigsbakke (1896)
17. Cornelia Levetzow: Havemanden (1896)
18. Karl Larsen: Kresjan Vesterbro (1897)
19. Christian Christensen: Kærlighedens Mysterier (1899)
The release contains the following files:
| File | Contents |
| :---------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| texts | Text files of the now 558 novels in the corpus. The text has a newline at line breaks in the book, and two newlines at page breaks. Some of the texts (the ones originally set in Fraktur) have been post-OCR-corrected using a procedure described in Bjerring-Hansen et al. (2022). The rest have also been post-OCR-corrected: error types were identified manually and corrections were implemented with look-up in the dictionary (Sprogteknologisk Ordbase, STO) to avoid introducing new errors. A consequence of this cautious method is that not all errors were corrected. |
| normalised | Orthographically normalized versions of the 558 texts. Same format as the files in "texts", normalized to Danish standard spelling. Nouns were lower-cased, "aa" was changed to "å", and frequent character patterns were changed to follow the Danish orthographic norm of 1948. As with the error-corrected version of the corpus, character patterns were identified manually and mainly implemented with look-up in the dictionary (Sprogteknologisk Ordbase, STO) to avoid over-generation. A consequence of this method is that not all words were normalized. |
| memo_all.vrt | VRT file (vertical format) of MeMo corpus v1.1 for indexing in Corpus Workbench (CWB). Format: One token per line delimited by \<corpus>, \<text>, and \<sentence> XML elements. The XML elements contain attributes with metadata. The tokens are annotated with various categories separated by tabs. For more information about the metadata, see the metadata excel file. For more information about the token annotations, see below. |
| MeMo-corpus-metadata-v1.1-2023-06-20.xlsx | Excel file with metadata about the novels in the corpus. See the "info" tab for information about the metadata categories. |
**Token annotations and metadata in VRT file**
There are nine columns of tokens and annotations in the corpus VRT file:
| Column 1 | Column 2 | Column 3 | Column 4 | Column 5 | Column 6 | Column 7 | Column 8 | Column 9 |
| :------- | :--------- | :--------- | :------------- | :------------------- | :--------------- | :--------------- | :--------------- | :------- |
| Token | Normalized | Lemma form | Part of speech | Word no. in sentence | Word no. in line | Word no. in book | Line no. on page | Page no. |
For information about the metadata also contained in the VRT file, see the file MeMo-corpus-metadata-v1.1-2023-06-20.xlsx.
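As a sketch, the token columns above can be read from the VRT file with a few lines of Python. The sample lines below are invented for illustration, and the column names paraphrase the table:

```python
COLUMNS = ["token", "normalized", "lemma", "pos",
           "word_in_sentence", "word_in_line", "word_in_book",
           "line_on_page", "page"]

def iter_tokens(lines):
    """Yield one dict per tab-separated token line, skipping XML markup lines."""
    for line in lines:
        line = line.rstrip("\n")
        if not line or line.startswith("<"):
            continue  # <corpus>/<text>/<sentence> lines carry metadata, not tokens
        yield dict(zip(COLUMNS, line.split("\t")))

# Invented sample in the format described above.
sample = [
    "<sentence id='1'>",
    "Hun\thun\thun\tPRON\t1\t1\t1\t1\t1",
    "gik\tgik\tgå\tVERB\t2\t2\t2\t1\t1",
    "</sentence>",
]
tokens = list(iter_tokens(sample))
print(tokens[1]["lemma"])  # -> gå
```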
**References**
Bjerring-Hansen, Jens, et al. "Mending Fractured Texts. A heuristic procedure for correcting OCR data." (2022). <https://ceur-ws.org/Vol-3232/paper14.pdf>
**Data Statement**
## 1. Header
1. Dataset Title
MeMo Corpus
2. Dataset Curator(s) [name, affiliation]
Jens Bjerring-Hansen, University of Copenhagen; Philip Diderichsen, University of Copenhagen; Dorte Haltrup Hansen, University of Copenhagen
3. Dataset Version [version, date]
Version 1.1, August 15, 2023
4. Dataset Citation and, if available,
####
5. DOI Data Statement
####
6. Author(s) [name, affiliation]
Jens Bjerring-Hansen, University of Copenhagen; Philip Diderichsen, University of Copenhagen
7. Data Statement Version [version, date]
Version 1, September 25, 2023
8. Data Statement Citation
####
## 2. Executive summary
The MeMo corpus is established to investigate literary and cultural change in a seminal epoch of Scandinavian cultural and social history (known as 'the modern breakthrough') using natural language processing and other computational methods. The corpus consists of original novels by Norwegian and Danish authors printed in Denmark in the period 1870-99. It includes 858 volumes, totaling 4.5 million sentences and 65 million words.
## 3. Text characteristics
The corpus consists of novels, i.e. long works of narrative fiction, usually written in prose and published as a book. The novels contain both dialogue and description. As instances of imaginative literature they are infused with ambiguity, interpretational confounding, rhetorical sophistication, and narrative layerings between author, narrator, and characters.
The cultural diversity of the texts in the corpus is pronounced. From a genre perspective, we have contemporary novels as well as historical novels and other forms of genre fiction such as romance, crime, and war stories (cf. Bjerring-Hansen and Rasmussen, 2023). And from an aesthetic perspective we have both avant-garde forms of realism, including instances of naturalism and impressionism, and more traditional prose with a preference for abstract or generalized over concrete specification (cf. Bjerring-Hansen and Wilkens, 2023).
Bjerring-Hansen, Jens, and Sebastian Ørntoft Rasmussen. 2023. “Litteratursociologi og kvantitative litteraturstudier Den historiske roman i det moderne gennembrud som case”. In Passage 89: 171–189.
Bjerring-Hansen, Jens, and Matt Wilkens. 2023. “Deep distant reading: The rise of realism in Scandinavian literature as a case study”. Orbis Litterarum. [doi:10.1111/oli.12396](https://doi.org/10.1111/oli.12396)
## 4. Curation Rationale
The MeMo Corpus was created as the basis for a research project, _MeMo – Measuring Modernity: Literary and Social Change in Scandinavia 1870-1900_, investigating how processes of social change in late nineteenth century Scandinavia were reflected and discussed in the novels from the period (project page: [https://nors.ku.dk/english/research/projects/measuring-modernity/](https://nors.ku.dk/english/research/projects/measuring-modernity/)). As opposed to traditional historiography on the period, which has focused on selected texts by a few prominent, male authors, our digital corpus, with rich metadata on texts and authors, allows for the capturing of robust literary and sociological trends and for new insights into the processes of modernization in this formative period in the literary and social history of Scandinavia. To this corpus we thus ask questions such as: How did this breakthrough of new ways of thinking and writing actually unfold? Who were the actors? And to what extent did newness relate to literature at large?
Also, the corpus acts as the empirical foundation of an interrelated methodological project, _Mining the Meaning_, which aims to develop state-of-the-art computational semantic methods and training large language models towards written late 19th-century Danish and Norwegian (project page: [https://mime-memo.github.io/](https://mime-memo.github.io/)).
Included in the corpus are all original (i.e. newly written) novels by Danish and Norwegian authors published in Denmark 1870-99. The list of texts was compiled on the basis of _Dansk Bogfortegnelse_ (a continuous list of books published in Denmark since 1841; from 1861 published annually) supplemented with literary handbooks and special bibliographies.
Not included (mainly due to pragmatic reasons and for the sake of coherence) in the corpus are:
* reprints
* translations
* serializations (i.e. serialized novels from newspapers and magazines)
* diasporic literature (i.e. novels by Danish emigrant authors in the U.S.)
Around 20% of the novels are produced by female authors. Thus, highlighting and exploring the often overlooked female literary production of the period is a distinctive ambition of the corpus and the explorations based on it.
## 5. Language Varieties
The language of the novels in the corpus is late nineteenth century Danish (BCP-47: da). On the whole, we are dealing with a more or less linguistically coherent body of texts. However, the following circumstances must be acknowledged:
* The texts contain a pronounced spelling variation, partly on an individual level, partly explained by an ongoing orthographic standardization, which is most clearly expressed in the Spelling Reform of 1892. Here, forms such as 'Kjøbenhavn' and 'Familje' became 'København' and 'Familie'.
* Some books are written in dialect (e.g. Jutlandic or West Norwegian) or contain dialectal features to create psychological individualism in the dialogue.
* Approximately 16% of the books are written by Norwegian authors. In this regard it should be noted that, until 1907, written Norwegian was practically identical to written Danish. ‘Norvagisms’ (i.e. distinct Norwegian words, not used by Danes) do appear.
## 6. Preprocessing and data formatting
**OCR scans**: The book volumes were scanned with optical character recognition (OCR) by the Royal Danish Library’s Digitization on Demand (DoD) team. The data were delivered as full volume PDF files with the OCR’ed text as an invisible searchable, copyable text layer, as full volume text files, and as single page text files (one text file per page for each volume).
**OCR correction**: The text files were automatically post-corrected for OCR errors. This involved two different processes, one for texts originally typeset in Antikva (Roman) typefaces, one in Fraktur (Gothic) typefaces. The Antikva files were corrected using a set of hand-crafted substitution patterns, with look-up in the dictionary Sprogteknologisk Ordbase, STO (Eng. ‘Word database for language technology’). The Fraktur files were corrected using a correction procedure involving a combination of spelling correction, hand-crafted pattern substitution, and improved OCR using the pretrained “Fraktur” Tesseract data plus an alternative OCR layer from the pretrained “dan” Tesseract data, which was used as a corrective to problems with the Danish characters “æ” and “ø” in particular. This procedure improved the word error rate of the Fraktur data from 10.46% to 2.84% (cf. Bjerring-Hansen et al. 2022).
Bjerring-Hansen, Jens, Philip Diderichsen, Dorte Haltrup Hansen, and Ross D. Kristensen-McLachlan. 2022. “Mending fractured texts. A heuristic procedure for correcting OCR.” Proceedings of the 6th Digital Humanities in the Nordic and Baltic Countries Conference, Uppsala, Sweden, March 15-18, 2022 (DHNB 2022): 177–186.
**Token-level annotation**: The corrected data were annotated with grammatical information using the pipeline orchestration tool Text Tonsorium available at [https://cst.dk/texton/](https://cst.dk/texton/), provided by the Danish CLARIN node. The particular pipeline used included the LaPos part of speech tagger, the CSTLemma lemmatizer, and an implementation of the Brill tagger. Grammatical information included lemma and part of speech, plus sentence and paragraph segmentation (which are of course not strictly speaking token-level annotations). In addition to the grammatical annotations, convenience annotations with various counters were also added: word number in sentence, word number on line, word number in book volume, line number on page, page number in book volume.
**Text normalization**: After OCR correction, all texts were normalized to modern Danish spelling using hand-crafted substitution patterns and lookup in STO (see above). Nouns were lower cased, “aa” changed to “å”, and frequent character patterns changed to obey modern Danish orthography.
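As a toy illustration of the normalization direction described above (not the actual pipeline, which validated each pattern against STO), two of the documented changes could look like this; applied blindly, such patterns over-generate, which is exactly why the real procedure used dictionary look-up:

```python
import re

# Two illustrative substitution patterns; the real pipeline used many
# hand-crafted patterns validated against the STO dictionary.
PATTERNS = [
    (re.compile(r"aa"), "å"),
    (re.compile(r"Aa"), "Å"),
    (re.compile(r"([KkGg])j(?=[eæø])"), r"\1"),  # Kjøbenhavn -> København
]

def normalize(token):
    for pattern, replacement in PATTERNS:
        token = pattern.sub(replacement, token)
    return token

print(normalize("Kjøbenhavn"))  # -> København
print(normalize("gaar"))        # -> går
```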
**VRT transformation**: After annotation with token-level categories and metadata, the data were transformed to a VRT file (vertical format) for indexing in Corpus Workbench (CWB). Format: One token per line delimited by \<corpus>, \<text>, and \<sentence> XML elements. The XML elements contain attributes with metadata. The tokens are annotated with the above-mentioned token-level annotations, separated by tabs. For more information about the metadata, see below.
The data are available as:
* OCR-corrected full volume text files
* Normalized full volume versions of these text files
* A single VRT file containing the whole corpus.
## 7. Limitations
A standard limitation of data preprocessed and annotated using automatic natural language processing tools and procedures is that the results are not perfect. Thus, basically all the layers of the data can be assumed to be flawed:
* Text data: The raw texts come from OCR scans of the physical book volumes. This process is not perfect, and although we have taken steps to mitigate errors, the basic text layer of the data can still be expected to have OCR errors (or wrong corrections) in 2-3% of tokens.
* Normalized data: The normalization to modern Danish spelling as such should not be expected to be perfect either. We currently do not have estimates of the error rate in the normalized data.
* Grammatical annotations: These are also added using automatic tools which cannot be expected to yield perfect results. We currently do not have estimates of error rates in the grammatical annotations.
* Metadata: The metadata are hand-curated by literary scholars and should be close to perfect. However, the occasional human error can of course not be ruled out.
## 8. Metadata
The metadata was curated with the help of students (Lasse Stein Holst, Lene Thanning Andersen, and Kirstine Nielsen Degn) on the basis of _Dansk Bogfortegnelse_ (1861-), [https://www.litteraturpriser.dk/](https://www.litteraturpriser.dk/), Ehrencron-Müller: _Anonym- og Pseudonym-Lexikon_ (1940) as well as additional literary and bibliographical handbooks.
Among the metadata categories are the following:
* file_id
* filename
* [author] firstname
* [author] surname
* [author] pseudonym
* [author] gender [m/f/unknown]
* [author] nationality [da/no/unknown]
* title
* subtitle
* volume
* year [of publication]
* pages [in total]
* illustrations [y/n]
* typeface [gothic/roman]
* publisher
* price
## 9. Disclosure and Ethical Review
Funding for the creation and curation is supplied by The Carlsberg Foundation through a Young Researcher Fellowship awarded to Jens Bjerring-Hansen, University of Copenhagen.
In terms of data management, the project data (novels from 1870-1900) consist of imaginative texts by non-living authors. The texts are out-of-copyright. From a GDPR perspective, the biographical, bibliographical and demographic data are historical as well as non-sensitive.
|
SLB13/Classifications | ---
license: apache-2.0
---
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064280 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-2.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
maharshipandya/spotify-tracks-dataset | ---
license: bsd
task_categories:
- feature-extraction
- tabular-classification
- tabular-regression
language:
- en
tags:
- music
- art
pretty_name: Spotify Tracks Dataset
size_categories:
- 100K<n<1M
---
# Content
This is a dataset of Spotify tracks over a range of **125** different genres. Each track has some audio features associated with it. The data is in `CSV` format which is tabular and can be loaded quickly.
# Usage
The dataset can be used for:
- Building a **Recommendation System** based on some user input or preference
- **Classification** purposes based on audio features and available genres
- Any other application that you can think of. Feel free to discuss!
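Since the data ships as a single CSV file, it can be read with the Python standard library alone. The rows below are placeholders; for the real data, open the downloaded dataset file instead of the inline string:

```python
import csv
import io

# Placeholder CSV with a subset of the dataset's columns; substitute
# the path to the downloaded dataset file for real use.
csv_text = """track_id,track_name,danceability,energy,track_genre
id_a,Example Track A,0.676,0.461,acoustic
id_b,Example Track B,0.420,0.166,rock
"""
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(len(rows))                       # -> 2
print(float(rows[0]["danceability"]))  # -> 0.676
```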
# Column Description
- **track_id**: The Spotify ID for the track
- **artists**: The artists' names who performed the track. If there is more than one artist, they are separated by a `;`
- **album_name**: The album name in which the track appears
- **track_name**: Name of the track
- **popularity**: **The popularity of a track is a value between 0 and 100, with 100 being the most popular**. The popularity is calculated by algorithm and is based, in the most part, on the total number of plays the track has had and how recent those plays are. Generally speaking, songs that are being played a lot now will have a higher popularity than songs that were played a lot in the past. Duplicate tracks (e.g. the same track from a single and an album) are rated independently. Artist and album popularity is derived mathematically from track popularity.
- **duration_ms**: The track length in milliseconds
- **explicit**: Whether or not the track has explicit lyrics (true = yes it does; false = no it does not OR unknown)
- **danceability**: Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable
- **energy**: Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale
- **key**: The key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. `0 = C`, `1 = C♯/D♭`, `2 = D`, and so on. If no key was detected, the value is -1
- **loudness**: The overall loudness of a track in decibels (dB)
- **mode**: Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0
- **speechiness**: Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks
- **acousticness**: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic
- **instrumentalness**: Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content
- **liveness**: Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live
- **valence**: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry)
- **tempo**: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration
- **time_signature**: An estimated time signature. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure). The time signature ranges from 3 to 7, indicating time signatures of `3/4` to `7/4`.
- **track_genre**: The genre in which the track belongs
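For example, the integer `key` and `mode` columns can be decoded into a readable label like this (a small helper sketch, not part of the dataset):

```python
PITCH_CLASSES = ["C", "C♯/D♭", "D", "D♯/E♭", "E", "F",
                 "F♯/G♭", "G", "G♯/A♭", "A", "A♯/B♭", "B"]

def key_name(key, mode):
    """Decode the `key`/`mode` integers described above.

    `key` uses standard pitch-class notation (-1 = no key detected);
    `mode` is 1 for major and 0 for minor.
    """
    if key == -1:
        return "unknown"
    return f"{PITCH_CLASSES[key]} {'major' if mode == 1 else 'minor'}"

print(key_name(0, 1))   # -> C major
print(key_name(-1, 0))  # -> unknown
```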
# Sources and Methodology
The data was collected and cleaned using Spotify's Web API and Python. |
HamdanXI/arb-eng-parallel-100k | ---
dataset_info:
features:
- name: arabic
dtype: string
- name: english
dtype: string
splits:
- name: train
num_bytes: 42932584.23270529
num_examples: 100000
download_size: 24089878
dataset_size: 42932584.23270529
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ACCORD-NLP/CODE-ACCORD-Entities | ---
language:
- en
---
# CODE-ACCORD: A Corpus of Building Regulatory Data for Rule Generation towards Automatic Compliance Checking
The CODE-ACCORD corpus contains annotated sentences from the building regulations of England and Finland and has been developed as part of the Horizon European project for Automated Compliance Checks for Construction, Renovation or Demolition Works ([ACCORD](https://accordproject.eu/)). The corpus is in English, and it consists of both the English Building Regulations and the English translation of the Finnish National Building Code.
## Data Annotation
CODE-ACCORD is mainly focused on extracting information from text to support rule generation. There are two key types of information found in the text: named entities and relations, which are essential for comprehending the ideas conveyed in natural language. Hence, this dataset primarily focused on annotating entities and relations.
Four categories were considered for entity annotation:
(1) object,
(2) property,
(3) quality and
(4) value.
The relation annotations span ten categories:
(1) selection,
(2) necessity,
(3) part-of,
(4) not-part-of,
(5) greater,
(6) greater-equal,
(7) equal,
(8) less-equal,
(9) less and
(10) none.
Please refer to our [Annotation Strategy](https://github.com/Accord-Project/CODE-ACCORD/blob/main/annotated_data/Annotation_Strategy_V1.0.0.pdf) for more details about the categories and sample annotations.
### Data Splits
Both entity- and relation-annotated data consist of two splits named *train* and *test*. The train split forms 80% of the full dataset, while the remaining 20% belongs to the test split.
### Entities
The format of an entity data file is as follows:
| Attribute | Description |
|-------------------|--------------------------------------------------------------------------------|
| example_id | Unique ID assigned to each sentence |
| content | Original textual content of the sentence |
| processed_content | Tokenised (using NLTK's `word_tokenize` function) textual content of the sentence |
| label | Entity-labelled sequence in IOB format |
| metadata | Additional information about the sentence (i.e. the original approved document from which the sentence was extracted) |
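The `label` attribute uses the IOB (Inside-Outside-Beginning) scheme over the tokenised sentence. As a hypothetical illustration (the sentence, its tags, and the `iob_spans` helper below are invented for this card, not taken from the corpus or its tooling):

```python
def iob_spans(tokens, labels):
    """Group an IOB-tagged token sequence into (entity_type, text) spans."""
    spans, current = [], None
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            # B- starts a new entity span, closing any open one.
            if current:
                spans.append(current)
            current = (label[2:], [token])
        elif label.startswith("I-") and current and current[0] == label[2:]:
            # I- continues the current span of the same entity type.
            current[1].append(token)
        else:
            # O (or an inconsistent I-) closes any open span.
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(toks)) for etype, toks in spans]

# Invented example using the four entity categories described above.
tokens = ["The", "stair", "width", "shall", "be", "at", "least", "900", "mm"]
labels = ["O", "B-object", "B-property", "O", "O", "O", "O", "B-value", "I-value"]
print(iob_spans(tokens, labels))
# → [('object', 'stair'), ('property', 'width'), ('value', '900 mm')]
```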
#### Using Data
The train and test splits of entity-annotated data can be loaded into Pandas DataFrames using the following Python code.
```python
from datasets import Dataset, load_dataset
train = Dataset.to_pandas(load_dataset('ACCORD-NLP/CODE-ACCORD-Entities', split='train'))
test = Dataset.to_pandas(load_dataset('ACCORD-NLP/CODE-ACCORD-Entities', split='test'))
```
### Relations
The format of a relation data file is as follows:
| Attribute | Description |
|-----------------|--------------------------------------------------------------------------------|
| example_id | Unique ID assigned to each sentence |
| content | Original textual content of the sentence |
| metadata | Additional information about the sentence (i.e. the original approved document from which the sentence was extracted) |
| tagged_sentence | Sentence with tagged entity pair |
| relation_type | Category of the relation in between the tagged entity pair |
#### Using Data
The train and test splits of relation-annotated data can be loaded into Pandas DataFrames using the following Python code.
```python
from datasets import Dataset, load_dataset
train = Dataset.to_pandas(load_dataset('ACCORD-NLP/CODE-ACCORD-Relations', split='train'))
test = Dataset.to_pandas(load_dataset('ACCORD-NLP/CODE-ACCORD-Relations', split='test'))
```
## Citation
More details about data annotation, statistics, and distribution are available in the following paper.
```
@article{hettiarachchi2024code,
title={{CODE-ACCORD}: A Corpus of Building Regulatory Data for Rule Generation towards Automatic Compliance Checking},
author={Hettiarachchi, Hansi and Dridi, Amna and Gaber, Mohamed Medhat and Parsafard, Pouyan and Bocaneala, Nicoleta and Breitenfelder, Katja and Costa, Gon{\c{c}}al and Hedblom, Maria and Juganaru-Mathieu, Mihaela and Mecharnia, Thamer and others},
journal={arXiv preprint arXiv:2403.02231},
year={2024}
}
```
|
isaquecerqueira/millan_sms_traffic | ---
license: odbl
---
# Milan SMS Traffic Dataset
This dataset contains information about hourly SMS traffic in Milan between 2013-11-01 and 2014-01-01. |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366584 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v3
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-13b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v3
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
liuyanchen1015/MULTI_VALUE_sst2_drop_copula_be_AP | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 7899
num_examples: 57
- name: test
num_bytes: 19940
num_examples: 145
- name: train
num_bytes: 306146
num_examples: 2994
download_size: 177471
dataset_size: 333985
---
# Dataset Card for "MULTI_VALUE_sst2_drop_copula_be_AP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Mihaiii__Covasna-0.1 | ---
pretty_name: Evaluation run of Mihaiii/Covasna-0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Mihaiii/Covasna-0.1](https://huggingface.co/Mihaiii/Covasna-0.1) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Mihaiii__Covasna-0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-26T00:32:53.254047](https://huggingface.co/datasets/open-llm-leaderboard/details_Mihaiii__Covasna-0.1/blob/main/results_2024-03-26T00-32-53.254047.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6061216034885455,\n\
\ \"acc_stderr\": 0.03258769363786083,\n \"acc_norm\": 0.6181668409316191,\n\
\ \"acc_norm_stderr\": 0.0334232160706306,\n \"mc1\": 0.3427172582619339,\n\
\ \"mc1_stderr\": 0.016614949385347036,\n \"mc2\": 0.5263875042990617,\n\
\ \"mc2_stderr\": 0.015338272847298986\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.4616040955631399,\n \"acc_stderr\": 0.014568245550296366,\n\
\ \"acc_norm\": 0.4880546075085324,\n \"acc_norm_stderr\": 0.014607220340597167\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5108544114718183,\n\
\ \"acc_stderr\": 0.0049886054982739,\n \"acc_norm\": 0.7006572395937064,\n\
\ \"acc_norm_stderr\": 0.004570342034463293\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n\
\ \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n\
\ \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.038234289699266046,\n\
\ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.038234289699266046\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6528301886792452,\n \"acc_stderr\": 0.029300101705549655,\n\
\ \"acc_norm\": 0.6528301886792452,\n \"acc_norm_stderr\": 0.029300101705549655\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n\
\ \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n\
\ \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"\
acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5375722543352601,\n\
\ \"acc_stderr\": 0.0380168510452446,\n \"acc_norm\": 0.5375722543352601,\n\
\ \"acc_norm_stderr\": 0.0380168510452446\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.04724007352383887,\n\
\ \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.04724007352383887\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n\
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.625531914893617,\n \"acc_stderr\": 0.03163910665367291,\n\
\ \"acc_norm\": 0.625531914893617,\n \"acc_norm_stderr\": 0.03163910665367291\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.39473684210526316,\n\
\ \"acc_stderr\": 0.045981880578165414,\n \"acc_norm\": 0.39473684210526316,\n\
\ \"acc_norm_stderr\": 0.045981880578165414\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3994708994708995,\n \"acc_stderr\": 0.025225450284067877,\n \"\
acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.025225450284067877\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n\
\ \"acc_stderr\": 0.04375888492727061,\n \"acc_norm\": 0.3968253968253968,\n\
\ \"acc_norm_stderr\": 0.04375888492727061\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7548387096774194,\n\
\ \"acc_stderr\": 0.024472243840895525,\n \"acc_norm\": 0.7548387096774194,\n\
\ \"acc_norm_stderr\": 0.024472243840895525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4433497536945813,\n \"acc_stderr\": 0.03495334582162933,\n\
\ \"acc_norm\": 0.4433497536945813,\n \"acc_norm_stderr\": 0.03495334582162933\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\"\
: 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217487,\n \"\
acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217487\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8808290155440415,\n \"acc_stderr\": 0.02338193534812143,\n\
\ \"acc_norm\": 0.8808290155440415,\n \"acc_norm_stderr\": 0.02338193534812143\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6487179487179487,\n \"acc_stderr\": 0.024203665177902803,\n\
\ \"acc_norm\": 0.6487179487179487,\n \"acc_norm_stderr\": 0.024203665177902803\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3111111111111111,\n \"acc_stderr\": 0.028226446749683522,\n \
\ \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.028226446749683522\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.0302839955258844,\n \
\ \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.0302839955258844\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.0386155754625517,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.0386155754625517\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8201834862385321,\n \"acc_stderr\": 0.016465345467391545,\n \"\
acc_norm\": 0.8201834862385321,\n \"acc_norm_stderr\": 0.016465345467391545\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5462962962962963,\n \"acc_stderr\": 0.033953227263757976,\n \"\
acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.033953227263757976\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7990196078431373,\n \"acc_stderr\": 0.028125972265654373,\n \"\
acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.028125972265654373\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8016877637130801,\n \"acc_stderr\": 0.025955020841621112,\n \
\ \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.025955020841621112\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7488789237668162,\n\
\ \"acc_stderr\": 0.029105220833224615,\n \"acc_norm\": 0.7488789237668162,\n\
\ \"acc_norm_stderr\": 0.029105220833224615\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7520661157024794,\n \"acc_stderr\": 0.039418975265163046,\n \"\
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.039418975265163046\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.042365112580946315,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.042365112580946315\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\
\ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531769,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531769\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406936,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406936\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7713920817369093,\n\
\ \"acc_stderr\": 0.015016884698539873,\n \"acc_norm\": 0.7713920817369093,\n\
\ \"acc_norm_stderr\": 0.015016884698539873\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.661849710982659,\n \"acc_stderr\": 0.02546977014940017,\n\
\ \"acc_norm\": 0.661849710982659,\n \"acc_norm_stderr\": 0.02546977014940017\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39217877094972065,\n\
\ \"acc_stderr\": 0.016329061073207453,\n \"acc_norm\": 0.39217877094972065,\n\
\ \"acc_norm_stderr\": 0.016329061073207453\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7026143790849673,\n \"acc_stderr\": 0.02617390850671858,\n\
\ \"acc_norm\": 0.7026143790849673,\n \"acc_norm_stderr\": 0.02617390850671858\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n\
\ \"acc_stderr\": 0.02623696588115326,\n \"acc_norm\": 0.6913183279742765,\n\
\ \"acc_norm_stderr\": 0.02623696588115326\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.025630824975621344,\n\
\ \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.025630824975621344\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4326241134751773,\n \"acc_stderr\": 0.029555454236778845,\n \
\ \"acc_norm\": 0.4326241134751773,\n \"acc_norm_stderr\": 0.029555454236778845\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4589308996088657,\n\
\ \"acc_stderr\": 0.012727084826799795,\n \"acc_norm\": 0.4589308996088657,\n\
\ \"acc_norm_stderr\": 0.012727084826799795\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.030161911930767102,\n\
\ \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.030161911930767102\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.684640522875817,\n \"acc_stderr\": 0.018798086284886877,\n \
\ \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.018798086284886877\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302505,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302505\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6571428571428571,\n \"acc_stderr\": 0.030387262919547724,\n\
\ \"acc_norm\": 0.6571428571428571,\n \"acc_norm_stderr\": 0.030387262919547724\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8159203980099502,\n\
\ \"acc_stderr\": 0.027403859410786862,\n \"acc_norm\": 0.8159203980099502,\n\
\ \"acc_norm_stderr\": 0.027403859410786862\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7660818713450293,\n \"acc_stderr\": 0.03246721765117825,\n\
\ \"acc_norm\": 0.7660818713450293,\n \"acc_norm_stderr\": 0.03246721765117825\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3427172582619339,\n\
\ \"mc1_stderr\": 0.016614949385347036,\n \"mc2\": 0.5263875042990617,\n\
\ \"mc2_stderr\": 0.015338272847298986\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7079715864246251,\n \"acc_stderr\": 0.012779198491754025\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009855951478392721,\n \
\ \"acc_stderr\": 0.0027210765770416608\n }\n}\n```"
repo_url: https://huggingface.co/Mihaiii/Covasna-0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|arc:challenge|25_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|gsm8k|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hellaswag|10_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-26T00-32-53.254047.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-26T00-32-53.254047.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- '**/details_harness|winogrande|5_2024-03-26T00-32-53.254047.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-26T00-32-53.254047.parquet'
- config_name: results
data_files:
- split: 2024_03_26T00_32_53.254047
path:
- results_2024-03-26T00-32-53.254047.parquet
- split: latest
path:
- results_2024-03-26T00-32-53.254047.parquet
---
# Dataset Card for Evaluation run of Mihaiii/Covasna-0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Mihaiii/Covasna-0.1](https://huggingface.co/Mihaiii/Covasna-0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Mihaiii__Covasna-0.1",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-26T00:32:53.254047](https://huggingface.co/datasets/open-llm-leaderboard/details_Mihaiii__Covasna-0.1/blob/main/results_2024-03-26T00-32-53.254047.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6061216034885455,
"acc_stderr": 0.03258769363786083,
"acc_norm": 0.6181668409316191,
"acc_norm_stderr": 0.0334232160706306,
"mc1": 0.3427172582619339,
"mc1_stderr": 0.016614949385347036,
"mc2": 0.5263875042990617,
"mc2_stderr": 0.015338272847298986
},
"harness|arc:challenge|25": {
"acc": 0.4616040955631399,
"acc_stderr": 0.014568245550296366,
"acc_norm": 0.4880546075085324,
"acc_norm_stderr": 0.014607220340597167
},
"harness|hellaswag|10": {
"acc": 0.5108544114718183,
"acc_stderr": 0.0049886054982739,
"acc_norm": 0.7006572395937064,
"acc_norm_stderr": 0.004570342034463293
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.038234289699266046,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.038234289699266046
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6528301886792452,
"acc_stderr": 0.029300101705549655,
"acc_norm": 0.6528301886792452,
"acc_norm_stderr": 0.029300101705549655
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5375722543352601,
"acc_stderr": 0.0380168510452446,
"acc_norm": 0.5375722543352601,
"acc_norm_stderr": 0.0380168510452446
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.04724007352383887,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.04724007352383887
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.625531914893617,
"acc_stderr": 0.03163910665367291,
"acc_norm": 0.625531914893617,
"acc_norm_stderr": 0.03163910665367291
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.39473684210526316,
"acc_stderr": 0.045981880578165414,
"acc_norm": 0.39473684210526316,
"acc_norm_stderr": 0.045981880578165414
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3994708994708995,
"acc_stderr": 0.025225450284067877,
"acc_norm": 0.3994708994708995,
"acc_norm_stderr": 0.025225450284067877
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.04375888492727061,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.04375888492727061
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7548387096774194,
"acc_stderr": 0.024472243840895525,
"acc_norm": 0.7548387096774194,
"acc_norm_stderr": 0.024472243840895525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4433497536945813,
"acc_stderr": 0.03495334582162933,
"acc_norm": 0.4433497536945813,
"acc_norm_stderr": 0.03495334582162933
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252609,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252609
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.030088629490217487,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.030088629490217487
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8808290155440415,
"acc_stderr": 0.02338193534812143,
"acc_norm": 0.8808290155440415,
"acc_norm_stderr": 0.02338193534812143
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6487179487179487,
"acc_stderr": 0.024203665177902803,
"acc_norm": 0.6487179487179487,
"acc_norm_stderr": 0.024203665177902803
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3111111111111111,
"acc_stderr": 0.028226446749683522,
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.028226446749683522
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.0302839955258844,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.0302839955258844
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.0386155754625517,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.0386155754625517
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8201834862385321,
"acc_stderr": 0.016465345467391545,
"acc_norm": 0.8201834862385321,
"acc_norm_stderr": 0.016465345467391545
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.033953227263757976,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.033953227263757976
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.028125972265654373,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.028125972265654373
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.025955020841621112,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.025955020841621112
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7488789237668162,
"acc_stderr": 0.029105220833224615,
"acc_norm": 0.7488789237668162,
"acc_norm_stderr": 0.029105220833224615
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.039418975265163046,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.039418975265163046
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.042365112580946315,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.042365112580946315
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531769,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531769
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406936,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406936
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7713920817369093,
"acc_stderr": 0.015016884698539873,
"acc_norm": 0.7713920817369093,
"acc_norm_stderr": 0.015016884698539873
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.661849710982659,
"acc_stderr": 0.02546977014940017,
"acc_norm": 0.661849710982659,
"acc_norm_stderr": 0.02546977014940017
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39217877094972065,
"acc_stderr": 0.016329061073207453,
"acc_norm": 0.39217877094972065,
"acc_norm_stderr": 0.016329061073207453
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7026143790849673,
"acc_stderr": 0.02617390850671858,
"acc_norm": 0.7026143790849673,
"acc_norm_stderr": 0.02617390850671858
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6913183279742765,
"acc_stderr": 0.02623696588115326,
"acc_norm": 0.6913183279742765,
"acc_norm_stderr": 0.02623696588115326
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.025630824975621344,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.025630824975621344
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4326241134751773,
"acc_stderr": 0.029555454236778845,
"acc_norm": 0.4326241134751773,
"acc_norm_stderr": 0.029555454236778845
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4589308996088657,
"acc_stderr": 0.012727084826799795,
"acc_norm": 0.4589308996088657,
"acc_norm_stderr": 0.012727084826799795
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.030161911930767102,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.030161911930767102
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.684640522875817,
"acc_stderr": 0.018798086284886877,
"acc_norm": 0.684640522875817,
"acc_norm_stderr": 0.018798086284886877
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302505,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302505
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6571428571428571,
"acc_stderr": 0.030387262919547724,
"acc_norm": 0.6571428571428571,
"acc_norm_stderr": 0.030387262919547724
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8159203980099502,
"acc_stderr": 0.027403859410786862,
"acc_norm": 0.8159203980099502,
"acc_norm_stderr": 0.027403859410786862
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7660818713450293,
"acc_stderr": 0.03246721765117825,
"acc_norm": 0.7660818713450293,
"acc_norm_stderr": 0.03246721765117825
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3427172582619339,
"mc1_stderr": 0.016614949385347036,
"mc2": 0.5263875042990617,
"mc2_stderr": 0.015338272847298986
},
"harness|winogrande|5": {
"acc": 0.7079715864246251,
"acc_stderr": 0.012779198491754025
},
"harness|gsm8k|5": {
"acc": 0.009855951478392721,
"acc_stderr": 0.0027210765770416608
}
}
```
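The per-task entries above share a common shape (`"harness|<task>|<n_shot>"` keys mapping to metric dicts), so they can be aggregated programmatically. A minimal sketch, using a small illustrative sample rather than the full results (the `results` dict below is an assumption standing in for the loaded JSON):

```python
# Illustrative sample of the results JSON shown above; in practice this dict
# would come from loading the results_*.json file.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.3},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5555555555555556},
    "harness|winogrande|5": {"acc": 0.7079715864246251},
}

# Select only the MMLU (hendrycksTest) tasks and average their accuracies.
mmlu_accs = [
    metrics["acc"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest-")
]
mmlu_avg = sum(mmlu_accs) / len(mmlu_accs)
print(f"MMLU tasks: {len(mmlu_accs)}, mean acc: {mmlu_avg:.4f}")
```

The same key-prefix filtering works for any task family in the dump (e.g. `"harness|truthfulqa"`), since the naming convention is uniform across entries.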
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/details_aboros98__merlin1 | ---
pretty_name: Evaluation run of aboros98/merlin1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [aboros98/merlin1](https://huggingface.co/aboros98/merlin1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aboros98__merlin1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-12T17:04:34.752225](https://huggingface.co/datasets/open-llm-leaderboard/details_aboros98__merlin1/blob/main/results_2024-03-12T17-04-34.752225.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5803631365007319,\n\
\ \"acc_stderr\": 0.03379683671031974,\n \"acc_norm\": 0.5819117908480674,\n\
\ \"acc_norm_stderr\": 0.034490432887019654,\n \"mc1\": 0.3292533659730722,\n\
\ \"mc1_stderr\": 0.016451264440068235,\n \"mc2\": 0.4834841796846446,\n\
\ \"mc2_stderr\": 0.01517611623380428\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5802047781569966,\n \"acc_stderr\": 0.014422181226303028,\n\
\ \"acc_norm\": 0.606655290102389,\n \"acc_norm_stderr\": 0.014275101465693028\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5592511451902011,\n\
\ \"acc_stderr\": 0.004954622308738999,\n \"acc_norm\": 0.7454690300736905,\n\
\ \"acc_norm_stderr\": 0.004347070019527476\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5460526315789473,\n \"acc_stderr\": 0.04051646342874142,\n\
\ \"acc_norm\": 0.5460526315789473,\n \"acc_norm_stderr\": 0.04051646342874142\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5735849056603773,\n \"acc_stderr\": 0.030437794342983052,\n\
\ \"acc_norm\": 0.5735849056603773,\n \"acc_norm_stderr\": 0.030437794342983052\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6180555555555556,\n\
\ \"acc_stderr\": 0.040629907841466674,\n \"acc_norm\": 0.6180555555555556,\n\
\ \"acc_norm_stderr\": 0.040629907841466674\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\"\
: 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.630057803468208,\n\
\ \"acc_stderr\": 0.036812296333943194,\n \"acc_norm\": 0.630057803468208,\n\
\ \"acc_norm_stderr\": 0.036812296333943194\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107224,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107224\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.04461960433384739,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.04461960433384739\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.502127659574468,\n \"acc_stderr\": 0.03268572658667492,\n\
\ \"acc_norm\": 0.502127659574468,\n \"acc_norm_stderr\": 0.03268572658667492\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n\
\ \"acc_stderr\": 0.045144961328736334,\n \"acc_norm\": 0.35964912280701755,\n\
\ \"acc_norm_stderr\": 0.045144961328736334\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192118,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192118\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4603174603174603,\n \"acc_stderr\": 0.025670080636909193,\n \"\
acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.025670080636909193\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n\
\ \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n\
\ \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.6870967741935484,\n \"acc_stderr\": 0.026377567028645854,\n \"\
acc_norm\": 0.6870967741935484,\n \"acc_norm_stderr\": 0.026377567028645854\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n \"\
acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\"\
: 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.696969696969697,\n \"acc_stderr\": 0.03588624800091707,\n\
\ \"acc_norm\": 0.696969696969697,\n \"acc_norm_stderr\": 0.03588624800091707\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7424242424242424,\n \"acc_stderr\": 0.03115626951964683,\n \"\
acc_norm\": 0.7424242424242424,\n \"acc_norm_stderr\": 0.03115626951964683\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7927461139896373,\n \"acc_stderr\": 0.029252823291803624,\n\
\ \"acc_norm\": 0.7927461139896373,\n \"acc_norm_stderr\": 0.029252823291803624\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5717948717948718,\n \"acc_stderr\": 0.025088301454694834,\n\
\ \"acc_norm\": 0.5717948717948718,\n \"acc_norm_stderr\": 0.025088301454694834\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34444444444444444,\n \"acc_stderr\": 0.02897264888484427,\n \
\ \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.02897264888484427\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.03120469122515001,\n \
\ \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.03120469122515001\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8018348623853211,\n \"acc_stderr\": 0.017090573804217905,\n \"\
acc_norm\": 0.8018348623853211,\n \"acc_norm_stderr\": 0.017090573804217905\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49074074074074076,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6813725490196079,\n \"acc_stderr\": 0.03270287181482081,\n \"\
acc_norm\": 0.6813725490196079,\n \"acc_norm_stderr\": 0.03270287181482081\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7468354430379747,\n \"acc_stderr\": 0.02830465794303529,\n \
\ \"acc_norm\": 0.7468354430379747,\n \"acc_norm_stderr\": 0.02830465794303529\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6098654708520179,\n\
\ \"acc_stderr\": 0.03273766725459156,\n \"acc_norm\": 0.6098654708520179,\n\
\ \"acc_norm_stderr\": 0.03273766725459156\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6870229007633588,\n \"acc_stderr\": 0.04066962905677697,\n\
\ \"acc_norm\": 0.6870229007633588,\n \"acc_norm_stderr\": 0.04066962905677697\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7272727272727273,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.04330043749650742,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.04330043749650742\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n\
\ \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n\
\ \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8376068376068376,\n\
\ \"acc_stderr\": 0.02416161812798774,\n \"acc_norm\": 0.8376068376068376,\n\
\ \"acc_norm_stderr\": 0.02416161812798774\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.685823754789272,\n\
\ \"acc_stderr\": 0.016599291735884904,\n \"acc_norm\": 0.685823754789272,\n\
\ \"acc_norm_stderr\": 0.016599291735884904\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.630057803468208,\n \"acc_stderr\": 0.02599247202930639,\n\
\ \"acc_norm\": 0.630057803468208,\n \"acc_norm_stderr\": 0.02599247202930639\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2536312849162011,\n\
\ \"acc_stderr\": 0.014551553659369922,\n \"acc_norm\": 0.2536312849162011,\n\
\ \"acc_norm_stderr\": 0.014551553659369922\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6339869281045751,\n \"acc_stderr\": 0.027582811415159614,\n\
\ \"acc_norm\": 0.6339869281045751,\n \"acc_norm_stderr\": 0.027582811415159614\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6205787781350482,\n\
\ \"acc_stderr\": 0.027559949802347817,\n \"acc_norm\": 0.6205787781350482,\n\
\ \"acc_norm_stderr\": 0.027559949802347817\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5987654320987654,\n \"acc_stderr\": 0.0272725828498398,\n\
\ \"acc_norm\": 0.5987654320987654,\n \"acc_norm_stderr\": 0.0272725828498398\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4219858156028369,\n \"acc_stderr\": 0.029462189233370604,\n \
\ \"acc_norm\": 0.4219858156028369,\n \"acc_norm_stderr\": 0.029462189233370604\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4165580182529335,\n\
\ \"acc_stderr\": 0.012591153245057387,\n \"acc_norm\": 0.4165580182529335,\n\
\ \"acc_norm_stderr\": 0.012591153245057387\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.47794117647058826,\n \"acc_stderr\": 0.030343264224213528,\n\
\ \"acc_norm\": 0.47794117647058826,\n \"acc_norm_stderr\": 0.030343264224213528\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5522875816993464,\n \"acc_stderr\": 0.020116925347422425,\n \
\ \"acc_norm\": 0.5522875816993464,\n \"acc_norm_stderr\": 0.020116925347422425\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302505,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302505\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7020408163265306,\n \"acc_stderr\": 0.02927956741106568,\n\
\ \"acc_norm\": 0.7020408163265306,\n \"acc_norm_stderr\": 0.02927956741106568\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7860696517412935,\n\
\ \"acc_stderr\": 0.028996909693328923,\n \"acc_norm\": 0.7860696517412935,\n\
\ \"acc_norm_stderr\": 0.028996909693328923\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932264,\n \
\ \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932264\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4759036144578313,\n\
\ \"acc_stderr\": 0.038879718495972646,\n \"acc_norm\": 0.4759036144578313,\n\
\ \"acc_norm_stderr\": 0.038879718495972646\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7017543859649122,\n \"acc_stderr\": 0.03508771929824563,\n\
\ \"acc_norm\": 0.7017543859649122,\n \"acc_norm_stderr\": 0.03508771929824563\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3292533659730722,\n\
\ \"mc1_stderr\": 0.016451264440068235,\n \"mc2\": 0.4834841796846446,\n\
\ \"mc2_stderr\": 0.01517611623380428\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.749802683504341,\n \"acc_stderr\": 0.012173009642449151\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5322213798332069,\n \
\ \"acc_stderr\": 0.013743857303073781\n }\n}\n```"
repo_url: https://huggingface.co/aboros98/merlin1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|arc:challenge|25_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|gsm8k|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hellaswag|10_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-12T17-04-34.752225.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-12T17-04-34.752225.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- '**/details_harness|winogrande|5_2024-03-12T17-04-34.752225.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-12T17-04-34.752225.parquet'
- config_name: results
data_files:
- split: 2024_03_12T17_04_34.752225
path:
- results_2024-03-12T17-04-34.752225.parquet
- split: latest
path:
- results_2024-03-12T17-04-34.752225.parquet
---
# Dataset Card for Evaluation run of aboros98/merlin1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [aboros98/merlin1](https://huggingface.co/aboros98/merlin1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_aboros98__merlin1",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-12T17:04:34.752225](https://huggingface.co/datasets/open-llm-leaderboard/details_aboros98__merlin1/blob/main/results_2024-03-12T17-04-34.752225.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split of each eval):
```json
{
"all": {
"acc": 0.5803631365007319,
"acc_stderr": 0.03379683671031974,
"acc_norm": 0.5819117908480674,
"acc_norm_stderr": 0.034490432887019654,
"mc1": 0.3292533659730722,
"mc1_stderr": 0.016451264440068235,
"mc2": 0.4834841796846446,
"mc2_stderr": 0.01517611623380428
},
"harness|arc:challenge|25": {
"acc": 0.5802047781569966,
"acc_stderr": 0.014422181226303028,
"acc_norm": 0.606655290102389,
"acc_norm_stderr": 0.014275101465693028
},
"harness|hellaswag|10": {
"acc": 0.5592511451902011,
"acc_stderr": 0.004954622308738999,
"acc_norm": 0.7454690300736905,
"acc_norm_stderr": 0.004347070019527476
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5460526315789473,
"acc_stderr": 0.04051646342874142,
"acc_norm": 0.5460526315789473,
"acc_norm_stderr": 0.04051646342874142
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5735849056603773,
"acc_stderr": 0.030437794342983052,
"acc_norm": 0.5735849056603773,
"acc_norm_stderr": 0.030437794342983052
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6180555555555556,
"acc_stderr": 0.040629907841466674,
"acc_norm": 0.6180555555555556,
"acc_norm_stderr": 0.040629907841466674
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.630057803468208,
"acc_stderr": 0.036812296333943194,
"acc_norm": 0.630057803468208,
"acc_norm_stderr": 0.036812296333943194
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.502127659574468,
"acc_stderr": 0.03268572658667492,
"acc_norm": 0.502127659574468,
"acc_norm_stderr": 0.03268572658667492
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.35964912280701755,
"acc_stderr": 0.045144961328736334,
"acc_norm": 0.35964912280701755,
"acc_norm_stderr": 0.045144961328736334
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192118,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192118
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.025670080636909193,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.025670080636909193
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6870967741935484,
"acc_stderr": 0.026377567028645854,
"acc_norm": 0.6870967741935484,
"acc_norm_stderr": 0.026377567028645854
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4729064039408867,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.4729064039408867,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.696969696969697,
"acc_stderr": 0.03588624800091707,
"acc_norm": 0.696969696969697,
"acc_norm_stderr": 0.03588624800091707
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7424242424242424,
"acc_stderr": 0.03115626951964683,
"acc_norm": 0.7424242424242424,
"acc_norm_stderr": 0.03115626951964683
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7927461139896373,
"acc_stderr": 0.029252823291803624,
"acc_norm": 0.7927461139896373,
"acc_norm_stderr": 0.029252823291803624
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5717948717948718,
"acc_stderr": 0.025088301454694834,
"acc_norm": 0.5717948717948718,
"acc_norm_stderr": 0.025088301454694834
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.02897264888484427,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.02897264888484427
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6386554621848739,
"acc_stderr": 0.03120469122515001,
"acc_norm": 0.6386554621848739,
"acc_norm_stderr": 0.03120469122515001
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8018348623853211,
"acc_stderr": 0.017090573804217905,
"acc_norm": 0.8018348623853211,
"acc_norm_stderr": 0.017090573804217905
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6813725490196079,
"acc_stderr": 0.03270287181482081,
"acc_norm": 0.6813725490196079,
"acc_norm_stderr": 0.03270287181482081
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7468354430379747,
"acc_stderr": 0.02830465794303529,
"acc_norm": 0.7468354430379747,
"acc_norm_stderr": 0.02830465794303529
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6098654708520179,
"acc_stderr": 0.03273766725459156,
"acc_norm": 0.6098654708520179,
"acc_norm_stderr": 0.03273766725459156
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6870229007633588,
"acc_stderr": 0.04066962905677697,
"acc_norm": 0.6870229007633588,
"acc_norm_stderr": 0.04066962905677697
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.04330043749650742,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.04330043749650742
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7239263803680982,
"acc_stderr": 0.035123852837050475,
"acc_norm": 0.7239263803680982,
"acc_norm_stderr": 0.035123852837050475
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8376068376068376,
"acc_stderr": 0.02416161812798774,
"acc_norm": 0.8376068376068376,
"acc_norm_stderr": 0.02416161812798774
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.685823754789272,
"acc_stderr": 0.016599291735884904,
"acc_norm": 0.685823754789272,
"acc_norm_stderr": 0.016599291735884904
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.630057803468208,
"acc_stderr": 0.02599247202930639,
"acc_norm": 0.630057803468208,
"acc_norm_stderr": 0.02599247202930639
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2536312849162011,
"acc_stderr": 0.014551553659369922,
"acc_norm": 0.2536312849162011,
"acc_norm_stderr": 0.014551553659369922
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6339869281045751,
"acc_stderr": 0.027582811415159614,
"acc_norm": 0.6339869281045751,
"acc_norm_stderr": 0.027582811415159614
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6205787781350482,
"acc_stderr": 0.027559949802347817,
"acc_norm": 0.6205787781350482,
"acc_norm_stderr": 0.027559949802347817
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5987654320987654,
"acc_stderr": 0.0272725828498398,
"acc_norm": 0.5987654320987654,
"acc_norm_stderr": 0.0272725828498398
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4219858156028369,
"acc_stderr": 0.029462189233370604,
"acc_norm": 0.4219858156028369,
"acc_norm_stderr": 0.029462189233370604
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4165580182529335,
"acc_stderr": 0.012591153245057387,
"acc_norm": 0.4165580182529335,
"acc_norm_stderr": 0.012591153245057387
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.47794117647058826,
"acc_stderr": 0.030343264224213528,
"acc_norm": 0.47794117647058826,
"acc_norm_stderr": 0.030343264224213528
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5522875816993464,
"acc_stderr": 0.020116925347422425,
"acc_norm": 0.5522875816993464,
"acc_norm_stderr": 0.020116925347422425
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302505,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302505
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7020408163265306,
"acc_stderr": 0.02927956741106568,
"acc_norm": 0.7020408163265306,
"acc_norm_stderr": 0.02927956741106568
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7860696517412935,
"acc_stderr": 0.028996909693328923,
"acc_norm": 0.7860696517412935,
"acc_norm_stderr": 0.028996909693328923
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932264,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932264
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4759036144578313,
"acc_stderr": 0.038879718495972646,
"acc_norm": 0.4759036144578313,
"acc_norm_stderr": 0.038879718495972646
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7017543859649122,
"acc_stderr": 0.03508771929824563,
"acc_norm": 0.7017543859649122,
"acc_norm_stderr": 0.03508771929824563
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3292533659730722,
"mc1_stderr": 0.016451264440068235,
"mc2": 0.4834841796846446,
"mc2_stderr": 0.01517611623380428
},
"harness|winogrande|5": {
"acc": 0.749802683504341,
"acc_stderr": 0.012173009642449151
},
"harness|gsm8k|5": {
"acc": 0.5322213798332069,
"acc_stderr": 0.013743857303073781
}
}
```
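The aggregated JSON above can be consumed programmatically once loaded. Below is a minimal sketch of extracting per-task accuracies; the `results` dict is a truncated stand-in for the real results file, and `task_accuracies` is an illustrative helper, not part of the `datasets` API:

```python
# Minimal sketch: pull per-task accuracies out of a results dict shaped like
# the "Latest results" JSON in this card. `results` below is a truncated
# stand-in for the full file.
results = {
    "all": {"acc": 0.5803631365007319, "acc_norm": 0.5819117908480674},
    "harness|winogrande|5": {"acc": 0.749802683504341},
    "harness|gsm8k|5": {"acc": 0.5322213798332069},
}

def task_accuracies(results):
    """Return {task: acc} for every per-task entry, skipping the 'all' aggregate."""
    return {
        task: metrics["acc"]
        for task, metrics in results.items()
        if task != "all" and "acc" in metrics
    }

accs = task_accuracies(results)
print(accs["harness|gsm8k|5"])  # 0.5322213798332069
```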
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ostris/sdxl_10_reg | ---
license: mit
---
# Stable Diffusion XL 1.0 Regularization Images
Note: all of these images were generated without the refiner; they are SDXL 1.0 base only.
These are some of my SDXL 1.0 regularization images, generated with various prompts, that are useful for regularization or other specialized training (color augmentation, blurring, sharpening, etc.). I will attempt to add more categories as I go along.
Each image has a corresponding txt file with the prompt used to generate it, as well as Automatic1111 generation data in the EXIF data.
## Groups
The current groups are as follows:
### random_1024_1024
- 2,263 images at 1024 x 1024
- random topics and subjects, celeb faces included as well
- generation params:
- cfg: 10
- sampler: DPM++ 2M Karras
- steps: 30
|
datasciathlete/corpus4everyone-klue-small-korean-NER | ---
dataset_info:
features:
- name: ner_tags
sequence:
class_label:
names:
          "0": B-PS
          "1": I-PS
          "2": B-OG
          "3": I-OG
          "4": B-LC
          "5": I-LC
          "6": O
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 128277756.0
num_examples: 102356
- name: validation
num_bytes: 32069439.0
num_examples: 25589
download_size: 17582758
dataset_size: 160347195.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
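The `ner_tags` feature stores integer class labels; below is a minimal sketch of decoding them back to tag names, assuming the index order declared in the YAML above:

```python
# Label names in the index order declared by this card's class_label mapping.
NER_NAMES = ["B-PS", "I-PS", "B-OG", "I-OG", "B-LC", "I-LC", "O"]

def decode_tags(tag_ids):
    """Map integer ner_tags back to their string labels."""
    return [NER_NAMES[i] for i in tag_ids]

decoded = decode_tags([0, 1, 6, 4])  # a person span, a filler, a location tag
```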
|
Braddy/rsicd_matched_v1 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: captions
sequence: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 242716169.875
num_examples: 4433
download_size: 228425595
dataset_size: 242716169.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rsicd_matched_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Elvira0/test | ---
license: wtfpl
---
|
dinhanhx/coco-2017-vi | ---
language:
- vi
- en
pretty_name: COCO 2017 image captions in Vietnamese
source_datasets:
- ms coco
tags:
- coco
- coco-2017-vi
- Vietnamese
license: unknown
task_categories:
- image-to-text
task_ids:
- image-captioning
---
# COCO 2017 image captions in Vietnamese
The dataset was first introduced in [dinhanhx/VisualRoBERTa](https://github.com/dinhanhx/VisualRoBERTa/tree/main).
I used VinAI tools to translate the [COCO 2017 image captions](https://cocodataset.org/#download) (2017 Train/Val annotations) from English to Vietnamese.
Then I merged the [UIT-ViIC](https://arxiv.org/abs/2002.00175) dataset into it.
To load the dataset, one can take a look at [this code in VisualRoBERTa](https://github.com/dinhanhx/VisualRoBERTa/blob/main/src/data.py#L22-L100) or [this code in Velvet](https://github.com/dinhanhx/velvet/blob/main/velvet/dataset/coco.py).
I provide both the English originals and the Vietnamese versions (including UIT-ViIC).
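The annotation files follow the standard COCO caption schema; here is a minimal loading sketch (the field names below assume the stock COCO 2017 annotation layout):

```python
# Minimal example following the stock COCO caption schema; in practice you
# would json.load() one of the en/ or vi/ annotation files from this repo.
coco = {
    "images": [{"id": 1, "file_name": "000000000001.jpg"}],
    "annotations": [{"image_id": 1, "caption": "một con mèo đang ngồi trên ghế"}],
}

# Join captions to image files via image_id.
id_to_file = {img["id"]: img["file_name"] for img in coco["images"]}
pairs = [(id_to_file[a["image_id"]], a["caption"]) for a in coco["annotations"]]
```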
⚠ Note:
- UIT-ViIC splits are derived from `en/captions_train2017.json`. I therefore combined all UIT-ViIC splits and merged them into `vi/captions_train2017_trans.json`, producing `captions_train2017_trans_plus.json`.
- `vi/captions_train2017_trans.json` and `vi/captions_val2017_trans.json` are VinAI-translated from the ones in `en/`. |
OlegKit/RND | ---
license: afl-3.0
---
|
hynky/czech-justice-summ-alpaca-long | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 26403302
num_examples: 4560
download_size: 12636847
dataset_size: 26403302
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "czech-justice-summ-alpaca-long"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ovior/twitter_dataset_1713031835 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2269271
num_examples: 6981
download_size: 1276790
dataset_size: 2269271
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/suzutsuki_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of suzutsuki/涼月/凉月 (Azur Lane)
This is the dataset of suzutsuki/涼月/凉月 (Azur Lane), containing 47 images and their tags.
The core tags of this character are `grey_hair, breasts, animal_ears, short_hair, bangs, earrings, facial_mark, hair_ornament, extra_ears, orange_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 47 | 60.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzutsuki_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 47 | 35.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzutsuki_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 115 | 74.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzutsuki_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 47 | 53.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzutsuki_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 115 | 105.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzutsuki_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/suzutsuki_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bare_shoulders, full_body, kimono, long_sleeves, solo, thigh_strap, wide_sleeves, looking_at_viewer, off_shoulder, bare_legs, cleavage, holding, pelvic_curtain, platform_footwear, thighs, fur_collar, jewelry, open_mouth, white_background, :d, black_headwear, hat, medium_breasts, obi, panties, simple_background, standing, tassel, zouri, black_footwear, hair_bun, paddle, sitting, skindentation |
| 1 | 7 |  |  |  |  |  | 1girl, fur_trim, holding_fan, long_sleeves, solo, white_kimono, looking_at_viewer, simple_background, wide_sleeves, fur_collar, white_background, cleavage, folded_fan, open_mouth, red_skirt, sash, smile, upper_body, brown_eyes, closed_mouth, hamster, jewelry, medium_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | full_body | kimono | long_sleeves | solo | thigh_strap | wide_sleeves | looking_at_viewer | off_shoulder | bare_legs | cleavage | holding | pelvic_curtain | platform_footwear | thighs | fur_collar | jewelry | open_mouth | white_background | :d | black_headwear | hat | medium_breasts | obi | panties | simple_background | standing | tassel | zouri | black_footwear | hair_bun | paddle | sitting | skindentation | fur_trim | holding_fan | white_kimono | folded_fan | red_skirt | sash | smile | upper_body | brown_eyes | closed_mouth | hamster |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:------------|:---------|:---------------|:-------|:--------------|:---------------|:--------------------|:---------------|:------------|:-----------|:----------|:-----------------|:--------------------|:---------|:-------------|:----------|:-------------|:-------------------|:-----|:-----------------|:------|:-----------------|:------|:----------|:--------------------|:-----------|:---------|:--------|:-----------------|:-----------|:---------|:----------|:----------------|:-----------|:--------------|:---------------|:-------------|:------------|:-------|:--------|:-------------|:-------------|:---------------|:----------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | | | X | X | | X | X | | | X | | | | | X | X | X | X | | | | X | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
Nexdata/chinese_dialect | ---
task_categories:
- automatic-speech-recognition
language:
- zh
---
# Dataset Card for chinese_dialect
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nexdata.ai/?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains 25,000 hours of Chinese Dialect speech data. It's collected from local dialect speakers in multiple dialect regions, covering Hokkien, Cantonese, Sichuan Dialect, Henan Dialects, Northeastern Dialect, Shanghai Dialect, Uyghur, Tibetan, etc. The format is 16kHz, 16bit, uncompressed wav, mono channel. The sentence accuracy is over 95%.
For more details, please refer to the link: https://nexdata.ai/speechRecognition?source=Huggingface
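As a sketch, the stated audio format (16 kHz, 16-bit, mono, uncompressed WAV) can be checked with the standard-library `wave` module; the tiny in-memory clip below exists only to exercise the check:

```python
import io
import wave

def matches_card_format(wav_bytes: bytes) -> bool:
    """Check a clip against the format stated in this card:
    16 kHz sample rate, 16-bit samples, mono channel."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        return (w.getframerate() == 16000
                and w.getsampwidth() == 2   # 16-bit == 2 bytes per sample
                and w.getnchannels() == 1)  # mono

# Build a 10 ms silent clip in memory just to demonstrate the check.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 160)
ok = matches_card_format(buf.getvalue())
```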
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Chinese Dialect
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License
### Citation Information
[More Information Needed]
### Contributions |
CyberHarem/izmir_granbluefantasy | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of izmir (Granblue Fantasy)
This is the dataset of izmir (Granblue Fantasy), containing 198 images and their tags.
The core tags of this character are `horns, long_hair, breasts, pointy_ears, hair_ornament, hair_between_eyes, large_breasts, mole_under_mouth, mole, blue_hair, hair_flower, red_eyes, very_long_hair, ponytail, hat`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 198 | 273.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izmir_granbluefantasy/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 198 | 166.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izmir_granbluefantasy/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 472 | 348.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izmir_granbluefantasy/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 198 | 244.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izmir_granbluefantasy/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 472 | 474.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izmir_granbluefantasy/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/izmir_granbluefantasy',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 19 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, draph, solo, elbow_gloves, looking_at_viewer, white_dress, white_gloves, brown_eyes, fur_collar, white_headwear, smile, blush, rose, snowflake_hair_ornament |
| 1 | 11 |  |  |  |  |  | 1boy, 1girl, bare_shoulders, blush, draph, flower, solo_focus, cum_on_breasts, smile, snowflake_hair_ornament, collarbone, elbow_gloves, penis, white_gloves, bangs, dress, huge_breasts, open_mouth, paizuri_under_clothes, brown_eyes, eyes_visible_through_hair, looking_at_viewer, white_headwear, facial, heart, nipples |
| 2 | 7 |  |  |  |  |  | 1girl, bare_shoulders, blush, draph, flower, official_alternate_costume, smile, solo, looking_at_viewer, open_mouth, underboob_cutout, white_one-piece_swimsuit, collarbone, simple_background, white_background, navel, snowflake_hair_ornament, thigh_strap, thighs |
| 3 | 9 |  |  |  |  |  | 1girl, bare_shoulders, draph, official_alternate_costume, smile, solo, blush, flower, navel, swimsuit, looking_at_viewer, open_mouth, simple_background, snowflake_hair_ornament, underboob_cutout |
| 4 | 7 |  |  |  |  |  | 1girl, draph, looking_at_viewer, smile, solo, blush, braid, hair_bow, sitting |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | cleavage | draph | solo | elbow_gloves | looking_at_viewer | white_dress | white_gloves | brown_eyes | fur_collar | white_headwear | smile | blush | rose | snowflake_hair_ornament | 1boy | flower | solo_focus | cum_on_breasts | collarbone | penis | bangs | dress | huge_breasts | open_mouth | paizuri_under_clothes | eyes_visible_through_hair | facial | heart | nipples | official_alternate_costume | underboob_cutout | white_one-piece_swimsuit | simple_background | white_background | navel | thigh_strap | thighs | swimsuit | braid | hair_bow | sitting |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-----------|:--------|:-------|:---------------|:--------------------|:--------------|:---------------|:-------------|:-------------|:-----------------|:--------|:--------|:-------|:--------------------------|:-------|:---------|:-------------|:-----------------|:-------------|:--------|:--------|:--------|:---------------|:-------------|:------------------------|:----------------------------|:---------|:--------|:----------|:-----------------------------|:-------------------|:---------------------------|:--------------------|:-------------------|:--------|:--------------|:---------|:-----------|:--------|:-----------|:----------|
| 0 | 19 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | | X | | X | X | | X | X | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | | X | X | | X | | | | | | X | X | | X | | X | | | X | | | | | X | | | | | | X | X | X | X | X | X | X | X | | | | |
| 3 | 9 |  |  |  |  |  | X | X | | X | X | | X | | | | | | X | X | | X | | X | | | | | | | | X | | | | | | X | X | | X | | X | | | X | | | |
| 4 | 7 |  |  |  |  |  | X | | | X | X | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X |
|
mnoukhov/summarize_from_feedback_tldr3_generated_20k_relabel_pythia1b_dpo_temp0.7 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: pred_chosen
dtype: float32
- name: pred_rejected
dtype: float32
splits:
- name: train
num_bytes: 36609147
num_examples: 19999
download_size: 22567847
dataset_size: 36609147
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NekoMirra/Miya | ---
license: afl-3.0
---
|
freshpearYoon/train_free_16 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 9604560280
num_examples: 10000
download_size: 1229136189
dataset_size: 9604560280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
code_x_glue_tc_nl_code_search_adv | ---
annotations_creators:
- found
language_creators:
- found
language:
- code
- en
license:
- c-uda
multilinguality:
- other-programming-languages
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
pretty_name: CodeXGlueTcNlCodeSearchAdv
dataset_info:
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: docstring_summary
dtype: string
- name: parameters
dtype: string
- name: return_statement
dtype: string
- name: argument_list
dtype: string
- name: identifier
dtype: string
- name: nwo
dtype: string
- name: score
dtype: float32
splits:
- name: train
num_bytes: 820714108
num_examples: 251820
- name: validation
num_bytes: 23468758
num_examples: 9604
- name: test
num_bytes: 47433608
num_examples: 19210
download_size: 316235421
dataset_size: 891616474
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "code_x_glue_tc_nl_code_search_adv"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv
- **Paper:** https://arxiv.org/abs/2102.04664
### Dataset Summary
CodeXGLUE NL-code-search-Adv dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv
The dataset comes from CodeSearchNet, filtered as follows:
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documents have fewer than 3 or more than 256 tokens.
- Remove examples whose documents contain special tokens (e.g. `<img ...>` or `https:...`).
- Remove examples whose documents are not in English.
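The filtering rules above can be sketched as a single predicate (an illustrative re-implementation; the upstream pipeline's exact tokenization and language check may differ):

```python
import ast
import re

MIN_DOC_TOKENS, MAX_DOC_TOKENS = 3, 256

def keep_example(code: str, doc_tokens: list) -> bool:
    """Return True if the example survives the card's filters."""
    # 1. The code must parse into an abstract syntax tree (Python split).
    try:
        ast.parse(code)
    except SyntaxError:
        return False
    # 2. The docstring must have between 3 and 256 tokens.
    if not (MIN_DOC_TOKENS <= len(doc_tokens) <= MAX_DOC_TOKENS):
        return False
    # 3. The docstring must not contain special tokens such as <img ...> or URLs.
    doc = " ".join(doc_tokens)
    if re.search(r"<img\b|https?:", doc):
        return False
    # (The non-English filter is omitted here; it requires a language detector.)
    return True
```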
### Supported Tasks and Leaderboards
- `document-retrieval`: The dataset can be used to train a model for retrieving top-k codes from a given **English** natural language query.
### Languages
- Python **programming** language
- English **natural** language
## Dataset Structure
### Data Instances
An example of 'validation' looks as follows.
```
{
"argument_list": "",
"code": "def Func(arg_0, arg_1='.', arg_2=True, arg_3=False, **arg_4):\n \"\"\"Downloads Dailymotion videos by URL.\n \"\"\"\n\n arg_5 = get_content(rebuilt_url(arg_0))\n arg_6 = json.loads(match1(arg_5, r'qualities\":({.+?}),\"'))\n arg_7 = match1(arg_5, r'\"video_title\"\\s*:\\s*\"([^\"]+)\"') or \\\n match1(arg_5, r'\"title\"\\s*:\\s*\"([^\"]+)\"')\n arg_7 = unicodize(arg_7)\n\n for arg_8 in ['1080','720','480','380','240','144','auto']:\n try:\n arg_9 = arg_6[arg_8][1][\"url\"]\n if arg_9:\n break\n except KeyError:\n pass\n\n arg_10, arg_11, arg_12 = url_info(arg_9)\n\n print_info(site_info, arg_7, arg_10, arg_12)\n if not arg_3:\n download_urls([arg_9], arg_7, arg_11, arg_12, arg_1=arg_1, arg_2=arg_2)",
"code_tokens": ["def", "Func", "(", "arg_0", ",", "arg_1", "=", "'.'", ",", "arg_2", "=", "True", ",", "arg_3", "=", "False", ",", "**", "arg_4", ")", ":", "arg_5", "=", "get_content", "(", "rebuilt_url", "(", "arg_0", ")", ")", "arg_6", "=", "json", ".", "loads", "(", "match1", "(", "arg_5", ",", "r'qualities\":({.+?}),\"'", ")", ")", "arg_7", "=", "match1", "(", "arg_5", ",", "r'\"video_title\"\\s*:\\s*\"([^\"]+)\"'", ")", "or", "match1", "(", "arg_5", ",", "r'\"title\"\\s*:\\s*\"([^\"]+)\"'", ")", "arg_7", "=", "unicodize", "(", "arg_7", ")", "for", "arg_8", "in", "[", "'1080'", ",", "'720'", ",", "'480'", ",", "'380'", ",", "'240'", ",", "'144'", ",", "'auto'", "]", ":", "try", ":", "arg_9", "=", "arg_6", "[", "arg_8", "]", "[", "1", "]", "[", "\"url\"", "]", "if", "arg_9", ":", "break", "except", "KeyError", ":", "pass", "arg_10", ",", "arg_11", ",", "arg_12", "=", "url_info", "(", "arg_9", ")", "print_info", "(", "site_info", ",", "arg_7", ",", "arg_10", ",", "arg_12", ")", "if", "not", "arg_3", ":", "download_urls", "(", "[", "arg_9", "]", ",", "arg_7", ",", "arg_11", ",", "arg_12", ",", "arg_1", "=", "arg_1", ",", "arg_2", "=", "arg_2", ")"],
"docstring": "Downloads Dailymotion videos by URL.",
"docstring_summary": "Downloads Dailymotion videos by URL.",
"docstring_tokens": ["Downloads", "Dailymotion", "videos", "by", "URL", "."],
"func_name": "",
"id": 0,
"identifier": "dailymotion_download",
"language": "python",
"nwo": "soimort/you-get",
"original_string": "",
"parameters": "(url, output_dir='.', merge=True, info_only=False, **kwargs)",
"path": "src/you_get/extractors/dailymotion.py",
"repo": "",
"return_statement": "",
"score": 0.9997601509094238,
"sha": "b746ac01c9f39de94cac2d56f665285b0523b974",
"url": "https://github.com/soimort/you-get/blob/b746ac01c9f39de94cac2d56f665285b0523b974/src/you_get/extractors/dailymotion.py#L13-L35"
}
```
### Data Fields
Each data field is explained below for the default config. The data fields are the same across all splits.
#### default
| field name | type | description |
|-----------------|-----------------------|-----------------------------------------------------------------------------------|
|id |int32 | Index of the sample |
|repo |string | repo: the owner/repo |
|path |string | path: the full path to the original file |
|func_name |string | func_name: the function or method name |
|original_string |string | original_string: the raw string before tokenization or parsing |
|language |string | language: the programming language |
|code |string | code/function: the part of the original_string that is code |
|code_tokens |Sequence[string] | code_tokens/function_tokens: tokenized version of code |
|docstring |string | docstring: the top-level comment or docstring, if it exists in the original string|
|docstring_tokens |Sequence[string] | docstring_tokens: tokenized version of docstring |
|sha |string | sha of the file |
|url |string | url of the file |
|docstring_summary|string | Summary of the docstring |
|parameters |string | parameters of the function |
|return_statement |string | return statement |
|argument_list |string | list of arguments of the function |
|identifier |string | identifier |
|nwo |string | nwo |
|score |datasets.Value("float") | score for this search |
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|251820| 9604|19210|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Data from CodeSearchNet Challenge dataset.
[More Information Needed]
#### Who are the source language producers?
Software Engineering developers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{DBLP:journals/corr/abs-2102-04664,
author = {Shuai Lu and
Daya Guo and
Shuo Ren and
Junjie Huang and
Alexey Svyatkovskiy and
Ambrosio Blanco and
Colin B. Clement and
Dawn Drain and
Daxin Jiang and
Duyu Tang and
Ge Li and
Lidong Zhou and
Linjun Shou and
Long Zhou and
Michele Tufano and
Ming Gong and
Ming Zhou and
Nan Duan and
Neel Sundaresan and
Shao Kun Deng and
Shengyu Fu and
Shujie Liu},
title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding
and Generation},
journal = {CoRR},
volume = {abs/2102.04664},
year = {2021}
}
@article{husain2019codesearchnet,
title={Codesearchnet challenge: Evaluating the state of semantic code search},
author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
journal={arXiv preprint arXiv:1909.09436},
year={2019}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
newsmediabias/FAKE-NEWS-BIASES-LABELLED | ---
license: openrail
---
|
CyberHarem/desert_eagle_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of desert_eagle/デザートイーグル/沙漠之鹰 (Girls' Frontline)
This is the dataset of desert_eagle/デザートイーグル/沙漠之鹰 (Girls' Frontline), containing 27 images and their tags.
The core tags of this character are `long_hair, bangs, very_long_hair, yellow_eyes, breasts, two_side_up, blonde_hair, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 27 | 49.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/desert_eagle_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 27 | 22.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/desert_eagle_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 71 | 54.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/desert_eagle_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 27 | 40.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/desert_eagle_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 71 | 80.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/desert_eagle_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/desert_eagle_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 27 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, frills, white_thighhighs, black_jacket, long_sleeves, closed_mouth, holding_gun, open_mouth, white_background, black_skirt, garter_straps, full_body, handgun, navel, open_jacket |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | blush | frills | white_thighhighs | black_jacket | long_sleeves | closed_mouth | holding_gun | open_mouth | white_background | black_skirt | garter_straps | full_body | handgun | navel | open_jacket |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:--------|:---------|:-------------------|:---------------|:---------------|:---------------|:--------------|:-------------|:-------------------|:--------------|:----------------|:------------|:----------|:--------|:--------------|
| 0 | 27 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/nemone_granbluefantasy | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nemone (Granblue Fantasy)
This is the dataset of nemone (Granblue Fantasy), containing 111 images and their tags.
The core tags of this character are `animal_ears, dark_skin, dark-skinned_female, hair_ornament, long_hair, breasts, hair_flower, green_eyes, grey_hair, cat_ears, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 111 | 122.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nemone_granbluefantasy/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 111 | 81.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nemone_granbluefantasy/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 250 | 164.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nemone_granbluefantasy/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 111 | 114.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nemone_granbluefantasy/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 250 | 219.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nemone_granbluefantasy/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nemone_granbluefantasy',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 46 |  |  |  |  |  | 1girl, erune, solo, flower, looking_at_viewer, smile, black_gloves, open_mouth, blush, sideboob, simple_background, white_background, bare_shoulders |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | erune | solo | flower | looking_at_viewer | smile | black_gloves | open_mouth | blush | sideboob | simple_background | white_background | bare_shoulders |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:---------|:--------------------|:--------|:---------------|:-------------|:--------|:-----------|:--------------------|:-------------------|:-----------------|
| 0 | 46 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
hieunguyen1053/binhvq-news-corpus-3 | ---
dataset_info:
features:
- name: text
dtype: string
- name: idx
dtype: int64
splits:
- name: train
num_bytes: 18273361123.47867
num_examples: 88806991
download_size: 10739913855
dataset_size: 18273361123.47867
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/python3-standardized_unified | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 292725703
num_examples: 108622
download_size: 0
dataset_size: 292725703
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zhanye/zhanye | ---
license: unknown
---
|
joey234/mmlu-management-neg-prepend-verbal | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: neg_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
splits:
- name: dev
num_bytes: 5661
num_examples: 5
- name: test
num_bytes: 585237
num_examples: 103
download_size: 102789
dataset_size: 590898
---
# Dataset Card for "mmlu-management-neg-prepend-verbal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alexchen4ai/rerank_samples | ---
dataset_info:
features:
- name: prompt1
dtype: string
- name: prompt2
dtype: string
- name: prompt3
dtype: string
- name: rank_diff
dtype: int64
splits:
- name: train
num_bytes: 62000
num_examples: 1000
download_size: 2339
dataset_size: 62000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thesistranslation/distilled-ccmatrix-en-de | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 7294036621
num_examples: 30000000
download_size: 5135500985
dataset_size: 7294036621
language:
- en
- de
---
# Dataset Card for "distilled-ccmatrix-en-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
euclaise/mqa | ---
size_categories:
- 10K<n<100K
task_categories:
- question-answering
pretty_name: MultiQA
dataset_info:
features:
- name: msg
dtype: string
- name: resp_correct
dtype: string
- name: resp_incorrect
sequence: string
splits:
- name: train
num_bytes: 20624051.02310231
num_examples: 23408
download_size: 18672769
dataset_size: 20624051.02310231
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# MQA
Aggregation of datasets as per [here](https://huggingface.co/collections/euclaise/mqa-650f41afae507a2c7ca18b55)
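Since each row pairs one correct response with a list of incorrect ones (fields `msg`, `resp_correct`, `resp_incorrect`), a common use is expanding rows into preference pairs, e.g. for DPO-style training. A minimal sketch (the `prompt`/`chosen`/`rejected` keys are an assumed output convention, not part of this dataset):

```python
def to_preference_pairs(row):
    # One (chosen, rejected) pair per incorrect response;
    # field names follow the dataset schema
    return [
        {"prompt": row["msg"], "chosen": row["resp_correct"], "rejected": bad}
        for bad in row["resp_incorrect"]
    ]

row = {"msg": "What is 2 + 2?", "resp_correct": "4", "resp_incorrect": ["3", "5"]}
pairs = to_preference_pairs(row)
print(len(pairs))  # 2
```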
I reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed as the original dataset from where it comes was. |
ClimatePolicyRadar/national-climate-targets | ---
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
- name: annotation_agent
dtype: int64
- name: geography
dtype: string
- name: region
dtype: string
- name: translated
dtype: bool
- name: annotation_NZT
dtype: int64
- name: annotation_Reduction
dtype: int64
- name: annotation_Other
dtype: int64
splits:
- name: train
num_bytes: 2912069
num_examples: 2610
download_size: 1522649
dataset_size: 2912069
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# National Climate Targets Training Dataset – Climate Policy Radar
A dataset of climate targets made by national governments in their laws, policies and UNFCCC submissions, which has been used to train a classifier. Text was sourced from the [Climate Policy Radar database](https://app.climatepolicyradar.org).
We define a target as an aim to achieve a specific outcome that is quantifiable and has a deadline.
This dataset distinguishes between different types of targets:
- **Reduction** (a.k.a. emissions reduction): a target referring to a reduction in greenhouse gas emissions, either economy-wide or for a sector.
- **Net zero**: a commitment to balance GHG emissions with removal, effectively reducing the net emissions to zero.
- **Other**: those that do not fit into the Reduction or Net Zero category but satisfy our definition of a target, e.g. renewable energy targets.
*IMPORTANT NOTE:* this dataset has been used to train a machine learning model, and **is not a list of all climate targets published by national governments**.
For more information on dataset creation, [see our paper](https://arxiv.org/abs/2404.02822).
## Dataset Description
This dataset includes 2,610 text passages containing 1,193 target mentions annotated in a multilabel setting: one text passage can be assigned to 0 or more target types. This breaks down as follows.
| | Number of passages |
|:--------------|--------:|
| NZT | 203 |
| Reduction | 359 |
| Other | 631 |
| No Annotation | 1,584 |
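As a sanity check, a breakdown like the one above can be reproduced with a short counting loop. The sketch below runs on toy rows: the column names (`annotation_NZT`, `annotation_Reduction`, `annotation_Other`) follow the dataset schema, while real rows would come from `load_dataset("ClimatePolicyRadar/national-climate-targets", split="train")`.

```python
from collections import Counter

LABELS = ("annotation_NZT", "annotation_Reduction", "annotation_Other")

def count_labels(rows):
    # Multilabel counting: a passage may carry several target types,
    # or none at all ("No Annotation")
    counts = Counter()
    for row in rows:
        active = [label for label in LABELS if row[label]]
        counts.update(active or ["No Annotation"])
    return counts

# Toy rows standing in for real dataset rows
rows = [
    {"annotation_NZT": 1, "annotation_Reduction": 1, "annotation_Other": 0},
    {"annotation_NZT": 0, "annotation_Reduction": 0, "annotation_Other": 1},
    {"annotation_NZT": 0, "annotation_Reduction": 0, "annotation_Other": 0},
]
print(count_labels(rows))
```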
It was annotated by three domain experts, with inter-annotator agreement measured to ensure consistency. Annotator `2` is a data scientist; their annotations are a combination of sampled negatives and errors caught during post-hoc reviews.
All text is in English: the `translated` column describes whether it has been translated from another language using the Google Cloud Translation API. Further to the text and annotations, we also include characteristics of the documents we use to make equity calculations and anonymised assignment of annotations to annotators.
For more information on the dataset and its creation see **our paper TBA**.
## License
Our dataset is licensed as [CC by 4.0](https://creativecommons.org/licenses/by/4.0/).
Please read our [Terms of Use](https://app.climatepolicyradar.org/terms-of-use), including any specific terms relevant to commercial use. Contact partners@climatepolicyradar.org with any questions.
## Links
- [Paper](https://arxiv.org/abs/2404.02822)
## Citation
*Juhasz, M., Marchand, T., Melwani, R., Dutia, K., Goodenough, S., Pim, H., & Franks, H. (2024). Identifying Climate Targets in National Laws and Policies using Machine Learning. arXiv preprint arXiv:2404.02822.*
```
@misc{juhasz2024identifying,
title={Identifying Climate Targets in National Laws and Policies using Machine Learning},
author={Matyas Juhasz and Tina Marchand and Roshan Melwani and Kalyan Dutia and Sarah Goodenough and Harrison Pim and Henry Franks},
year={2024},
eprint={2404.02822},
archivePrefix={arXiv},
primaryClass={cs.CY}
}
```
## Authors & Contact
Climate Policy Radar team: Matyas Juhasz, Tina Marchand, Roshan Melwani, Kalyan Dutia, Sarah Goodenough, Harrison Pim, and Henry Franks.
https://climatepolicyradar.org |
minwoosun/CholecSeg8k | ---
license: cc-by-nc-sa-4.0
task_categories:
- image-segmentation
language:
- en
tags:
- medical
- biology
pretty_name: CholecSeg8k
size_categories:
- 1K<n<10K
---
# Description:
[paper](https://arxiv.org/abs/2012.12453) | [kaggle](https://www.kaggle.com/datasets/newslab/cholecseg8k)
The CholecSeg8k dataset, an extension of the Cholec80 collection, includes 8,080 carefully annotated images from laparoscopic cholecystectomy surgeries, selected from 17 video clips in Cholec80. Each image in CholecSeg8K is pixel-level annotated for thirteen different surgical elements. The dataset is efficiently organized in a directory structure, featuring 101 folders, each containing 80 frames at a resolution of 854x480, along with three types of masks for each frame: a color mask for visualization, an annotation tool mask, and a watershed mask for simplified processing. This comprehensive dataset, freely available under the CC BY-NC-SA 4.0 license, is a critical resource for advancing the field of computer-assisted surgical procedures.
# Loading the data:
First install the `datasets` library, then run the following code:
```python
from datasets import load_dataset
dataset = load_dataset("minwoosun/CholecSeg8k", trust_remote_code=True)
```
# Simple demo:
This short demo shows how to load the data and directly visualize an image along with the corresponding masks.
```python
from datasets import load_dataset
import matplotlib.pyplot as plt
dataset = load_dataset("minwoosun/CholecSeg8k", trust_remote_code=True)
def display_image(dataset, image_index):
'''Display the image and corresponding three masks.'''
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
for ax in axs.flat:
ax.axis('off')
# Display each image in its respective subplot
axs[0, 0].imshow(dataset['train'][image_index]['image'])
axs[0, 1].imshow(dataset['train'][image_index]['color_mask'])
axs[1, 0].imshow(dataset['train'][image_index]['watershed_mask'])
axs[1, 1].imshow(dataset['train'][image_index]['annotation_mask'])
# Adjust spacing between images
plt.subplots_adjust(wspace=0.01, hspace=-0.6)
plt.show()
display_image(dataset, 800)  # image index from 0 to 8079
```

# Citation (BibTex):
```
@misc{hong2020cholecseg8k,
title={CholecSeg8k: A Semantic Segmentation Dataset for Laparoscopic Cholecystectomy Based on Cholec80},
author={W. -Y. Hong and C. -L. Kao and Y. -H. Kuo and J. -R. Wang and W. -L. Chang and C. -S. Shih},
year={2020},
eprint={2012.12453},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
# Data card contact:
Min Woo Sun (minwoos@stanford.edu)
|
Allakhazam/chunli | ---
license: cc-by-nc-4.0
---
|
irds/wapo_v3_trec-news-2020 | ---
pretty_name: '`wapo/v3/trec-news-2020`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wapo/v3/trec-news-2020`
The `wapo/v3/trec-news-2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v3/trec-news-2020).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels` (relevance assessments); count=17,764
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/wapo_v3_trec-news-2020', 'queries')
for record in queries:
record # {'query_id': ..., 'doc_id': ..., 'url': ...}
qrels = load_dataset('irds/wapo_v3_trec-news-2020', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
Cohere/miracl-fi-queries-22-12 | ---
annotations_creators:
- expert-generated
language:
- fi
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (fi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot-product** as the similarity measure: compare the query embeddings with the document embeddings, either using a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fi-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim): only the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
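For reference, both metrics are straightforward to compute for a single query. The sketch below assumes binary relevance; the document IDs and relevance judgments are made up for illustration.

```python
import math

def hit_at_k(ranked_doc_ids, relevant_ids, k=3):
    # 1 if at least one relevant document appears in the top-k results, else 0
    return int(any(d in relevant_ids for d in ranked_doc_ids[:k]))

def ndcg_at_k(ranked_doc_ids, relevance, k=10):
    # Binary-relevance nDCG@k: DCG of the ranking divided by the ideal DCG
    dcg = sum(relevance.get(d, 0) / math.log2(i + 2)
              for i, d in enumerate(ranked_doc_ids[:k]))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical ranking and qrels for one query
ranked = ["d3", "d1", "d7", "d2"]
qrels = {"d1": 1, "d2": 1}

print(hit_at_k(ranked, set(qrels)))        # 1 (d1 is in the top 3)
print(round(ndcg_at_k(ranked, qrels), 2))  # 0.65
```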
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16 | ---
pretty_name: Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16](https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-25T11:55:30.067408](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16/blob/main/results_2023-10-25T11-55-30.067408.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0016778523489932886,\n\
\ \"em_stderr\": 0.0004191330178826867,\n \"f1\": 0.06479446308724818,\n\
\ \"f1_stderr\": 0.0014092381006987735,\n \"acc\": 0.4316480101102639,\n\
\ \"acc_stderr\": 0.010106933446946506\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.0004191330178826867,\n\
\ \"f1\": 0.06479446308724818,\n \"f1_stderr\": 0.0014092381006987735\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10007581501137225,\n \
\ \"acc_stderr\": 0.00826627452868562\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7632202052091555,\n \"acc_stderr\": 0.01194759236520739\n\
\ }\n}\n```"
repo_url: https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|arc:challenge|25_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_25T11_55_30.067408
path:
- '**/details_harness|drop|3_2023-10-25T11-55-30.067408.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-25T11-55-30.067408.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_25T11_55_30.067408
path:
- '**/details_harness|gsm8k|5_2023-10-25T11-55-30.067408.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-25T11-55-30.067408.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hellaswag|10_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-11T17-50-57.787560.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-11T17-50-57.787560.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-11T17-50-57.787560.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_25T11_55_30.067408
path:
- '**/details_harness|winogrande|5_2023-10-25T11-55-30.067408.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-25T11-55-30.067408.parquet'
- config_name: results
data_files:
- split: 2023_09_11T17_50_57.787560
path:
- results_2023-09-11T17-50-57.787560.parquet
- split: 2023_10_25T11_55_30.067408
path:
- results_2023-10-25T11-55-30.067408.parquet
- split: latest
path:
- results_2023-10-25T11-55-30.067408.parquet
---
# Dataset Card for Evaluation run of dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16](https://huggingface.co/dhmeltzer/Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16",
"harness_winogrande_5",
split="train")
```
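The timestamped split names use underscores where the corresponding parquet filenames use dashes. A small helper (illustrative only, not part of any official tooling) converts a split name into the timestamp found in the filenames:

```python
def split_to_timestamp(split_name: str) -> str:
    """Map a split name like '2023_10_25T11_55_30.067408' to the
    timestamp used in the parquet filenames, e.g.
    '2023-10-25T11-55-30.067408' (underscores become dashes)."""
    return split_name.replace("_", "-")
```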
## Latest results
These are the [latest results from run 2023-10-25T11:55:30.067408](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-13b-hf-ds_wiki_1024_full_r_64_alpha_16/blob/main/results_2023-10-25T11-55-30.067408.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find them in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0016778523489932886,
"em_stderr": 0.0004191330178826867,
"f1": 0.06479446308724818,
"f1_stderr": 0.0014092381006987735,
"acc": 0.4316480101102639,
"acc_stderr": 0.010106933446946506
},
"harness|drop|3": {
"em": 0.0016778523489932886,
"em_stderr": 0.0004191330178826867,
"f1": 0.06479446308724818,
"f1_stderr": 0.0014092381006987735
},
"harness|gsm8k|5": {
"acc": 0.10007581501137225,
"acc_stderr": 0.00826627452868562
},
"harness|winogrande|5": {
"acc": 0.7632202052091555,
"acc_stderr": 0.01194759236520739
}
}
```
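The JSON above is a plain nested mapping, so once loaded (e.g. with `json.loads`) the per-task metrics can be extracted with a simple dict comprehension. The sketch below uses an abbreviated copy of the values shown above:

```python
import json

# Aggregated results as shown above, abbreviated to the fields used here.
latest = json.loads("""
{
  "all": {"em": 0.0016778523489932886, "f1": 0.06479446308724818, "acc": 0.4316480101102639},
  "harness|drop|3": {"em": 0.0016778523489932886, "f1": 0.06479446308724818},
  "harness|gsm8k|5": {"acc": 0.10007581501137225},
  "harness|winogrande|5": {"acc": 0.7632202052091555}
}
""")

# Collect accuracy per task, skipping entries (like drop) that report em/f1 instead.
accs = {task: scores["acc"] for task, scores in latest.items() if "acc" in scores}
```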
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
open-llm-leaderboard/details_TeeZee__Buttocks-7B-v1.0 | ---
pretty_name: Evaluation run of TeeZee/Buttocks-7B-v1.0
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TeeZee/Buttocks-7B-v1.0](https://huggingface.co/TeeZee/Buttocks-7B-v1.0) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TeeZee__Buttocks-7B-v1.0\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-25T05:45:51.088274](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__Buttocks-7B-v1.0/blob/main/results_2024-01-25T05-45-51.088274.json)\
\ (note that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks; you can find them in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.49958748050002727,\n\
\ \"acc_stderr\": 0.03449947558483939,\n \"acc_norm\": 0.5072913093747228,\n\
\ \"acc_norm_stderr\": 0.03532795103647748,\n \"mc1\": 0.2962056303549572,\n\
\ \"mc1_stderr\": 0.015983595101811392,\n \"mc2\": 0.4472415883922134,\n\
\ \"mc2_stderr\": 0.015128282783775687\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5213310580204779,\n \"acc_stderr\": 0.014598087973127106,\n\
\ \"acc_norm\": 0.5460750853242321,\n \"acc_norm_stderr\": 0.01454922110517187\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.578868751244772,\n\
\ \"acc_stderr\": 0.004927314729433553,\n \"acc_norm\": 0.7561242780322645,\n\
\ \"acc_norm_stderr\": 0.004285410130466104\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.48148148148148145,\n\
\ \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.48148148148148145,\n\
\ \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.4342105263157895,\n \"acc_stderr\": 0.040335656678483184,\n\
\ \"acc_norm\": 0.4342105263157895,\n \"acc_norm_stderr\": 0.040335656678483184\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.44,\n\
\ \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5358490566037736,\n \"acc_stderr\": 0.030693675018458003,\n\
\ \"acc_norm\": 0.5358490566037736,\n \"acc_norm_stderr\": 0.030693675018458003\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5347222222222222,\n\
\ \"acc_stderr\": 0.04171115858181618,\n \"acc_norm\": 0.5347222222222222,\n\
\ \"acc_norm_stderr\": 0.04171115858181618\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n\
\ \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4508670520231214,\n\
\ \"acc_stderr\": 0.03794012674697029,\n \"acc_norm\": 0.4508670520231214,\n\
\ \"acc_norm_stderr\": 0.03794012674697029\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.04724007352383888,\n\
\ \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.04724007352383888\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.63,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n\
\ \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4595744680851064,\n \"acc_stderr\": 0.03257901482099835,\n\
\ \"acc_norm\": 0.4595744680851064,\n \"acc_norm_stderr\": 0.03257901482099835\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n\
\ \"acc_stderr\": 0.04514496132873633,\n \"acc_norm\": 0.35964912280701755,\n\
\ \"acc_norm_stderr\": 0.04514496132873633\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.45517241379310347,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.45517241379310347,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.36243386243386244,\n \"acc_stderr\": 0.024757473902752035,\n \"\
acc_norm\": 0.36243386243386244,\n \"acc_norm_stderr\": 0.024757473902752035\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2698412698412698,\n\
\ \"acc_stderr\": 0.03970158273235172,\n \"acc_norm\": 0.2698412698412698,\n\
\ \"acc_norm_stderr\": 0.03970158273235172\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.6064516129032258,\n \"acc_stderr\": 0.027791878753132267,\n \"\
acc_norm\": 0.6064516129032258,\n \"acc_norm_stderr\": 0.027791878753132267\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.39408866995073893,\n \"acc_stderr\": 0.034381579670365446,\n \"\
acc_norm\": 0.39408866995073893,\n \"acc_norm_stderr\": 0.034381579670365446\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\"\
: 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6303030303030303,\n \"acc_stderr\": 0.03769430314512568,\n\
\ \"acc_norm\": 0.6303030303030303,\n \"acc_norm_stderr\": 0.03769430314512568\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6565656565656566,\n \"acc_stderr\": 0.03383201223244441,\n \"\
acc_norm\": 0.6565656565656566,\n \"acc_norm_stderr\": 0.03383201223244441\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.689119170984456,\n \"acc_stderr\": 0.03340361906276586,\n\
\ \"acc_norm\": 0.689119170984456,\n \"acc_norm_stderr\": 0.03340361906276586\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4846153846153846,\n \"acc_stderr\": 0.025339003010106522,\n\
\ \"acc_norm\": 0.4846153846153846,\n \"acc_norm_stderr\": 0.025339003010106522\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n \
\ \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5378151260504201,\n \"acc_stderr\": 0.03238546948758979,\n \
\ \"acc_norm\": 0.5378151260504201,\n \"acc_norm_stderr\": 0.03238546948758979\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"\
acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.6990825688073394,\n \"acc_stderr\": 0.019664751366802114,\n \"\
acc_norm\": 0.6990825688073394,\n \"acc_norm_stderr\": 0.019664751366802114\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608043,\n \"\
acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608043\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6127450980392157,\n \"acc_stderr\": 0.03418931233833342,\n \"\
acc_norm\": 0.6127450980392157,\n \"acc_norm_stderr\": 0.03418931233833342\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6413502109704642,\n \"acc_stderr\": 0.031219569445301833,\n \
\ \"acc_norm\": 0.6413502109704642,\n \"acc_norm_stderr\": 0.031219569445301833\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5605381165919282,\n\
\ \"acc_stderr\": 0.03331092511038179,\n \"acc_norm\": 0.5605381165919282,\n\
\ \"acc_norm_stderr\": 0.03331092511038179\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5801526717557252,\n \"acc_stderr\": 0.04328577215262971,\n\
\ \"acc_norm\": 0.5801526717557252,\n \"acc_norm_stderr\": 0.04328577215262971\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6528925619834711,\n \"acc_stderr\": 0.04345724570292535,\n \"\
acc_norm\": 0.6528925619834711,\n \"acc_norm_stderr\": 0.04345724570292535\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5648148148148148,\n\
\ \"acc_stderr\": 0.04792898170907061,\n \"acc_norm\": 0.5648148148148148,\n\
\ \"acc_norm_stderr\": 0.04792898170907061\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.5828220858895705,\n \"acc_stderr\": 0.03874102859818081,\n\
\ \"acc_norm\": 0.5828220858895705,\n \"acc_norm_stderr\": 0.03874102859818081\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n\
\ \"acc_stderr\": 0.04653333146973647,\n \"acc_norm\": 0.4017857142857143,\n\
\ \"acc_norm_stderr\": 0.04653333146973647\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6796116504854369,\n \"acc_stderr\": 0.04620284082280042,\n\
\ \"acc_norm\": 0.6796116504854369,\n \"acc_norm_stderr\": 0.04620284082280042\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7649572649572649,\n\
\ \"acc_stderr\": 0.027778835904935437,\n \"acc_norm\": 0.7649572649572649,\n\
\ \"acc_norm_stderr\": 0.027778835904935437\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6411238825031929,\n\
\ \"acc_stderr\": 0.017152991797501342,\n \"acc_norm\": 0.6411238825031929,\n\
\ \"acc_norm_stderr\": 0.017152991797501342\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.026919095102908273,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.026919095102908273\n \
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23687150837988827,\n\
\ \"acc_stderr\": 0.014219570788103986,\n \"acc_norm\": 0.23687150837988827,\n\
\ \"acc_norm_stderr\": 0.014219570788103986\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5032679738562091,\n \"acc_stderr\": 0.028629305194003543,\n\
\ \"acc_norm\": 0.5032679738562091,\n \"acc_norm_stderr\": 0.028629305194003543\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5852090032154341,\n\
\ \"acc_stderr\": 0.027982680459759567,\n \"acc_norm\": 0.5852090032154341,\n\
\ \"acc_norm_stderr\": 0.027982680459759567\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5123456790123457,\n \"acc_stderr\": 0.027812262269327242,\n\
\ \"acc_norm\": 0.5123456790123457,\n \"acc_norm_stderr\": 0.027812262269327242\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.40070921985815605,\n \"acc_stderr\": 0.029233465745573086,\n \
\ \"acc_norm\": 0.40070921985815605,\n \"acc_norm_stderr\": 0.029233465745573086\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3852672750977836,\n\
\ \"acc_stderr\": 0.012429485434955182,\n \"acc_norm\": 0.3852672750977836,\n\
\ \"acc_norm_stderr\": 0.012429485434955182\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.47058823529411764,\n \"acc_stderr\": 0.030320243265004137,\n\
\ \"acc_norm\": 0.47058823529411764,\n \"acc_norm_stderr\": 0.030320243265004137\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.4820261437908497,\n \"acc_stderr\": 0.020214761037872408,\n \
\ \"acc_norm\": 0.4820261437908497,\n \"acc_norm_stderr\": 0.020214761037872408\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5346938775510204,\n \"acc_stderr\": 0.03193207024425314,\n\
\ \"acc_norm\": 0.5346938775510204,\n \"acc_norm_stderr\": 0.03193207024425314\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6318407960199005,\n\
\ \"acc_stderr\": 0.03410410565495302,\n \"acc_norm\": 0.6318407960199005,\n\
\ \"acc_norm_stderr\": 0.03410410565495302\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4578313253012048,\n\
\ \"acc_stderr\": 0.038786267710023595,\n \"acc_norm\": 0.4578313253012048,\n\
\ \"acc_norm_stderr\": 0.038786267710023595\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6374269005847953,\n \"acc_stderr\": 0.036871306155620606,\n\
\ \"acc_norm\": 0.6374269005847953,\n \"acc_norm_stderr\": 0.036871306155620606\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2962056303549572,\n\
\ \"mc1_stderr\": 0.015983595101811392,\n \"mc2\": 0.4472415883922134,\n\
\ \"mc2_stderr\": 0.015128282783775687\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6890292028413575,\n \"acc_stderr\": 0.013009534736286058\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0576194086429113,\n \
\ \"acc_stderr\": 0.006418593319822861\n }\n}\n```"
repo_url: https://huggingface.co/TeeZee/Buttocks-7B-v1.0
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|arc:challenge|25_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|gsm8k|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hellaswag|10_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T05-45-51.088274.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-25T05-45-51.088274.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- '**/details_harness|winogrande|5_2024-01-25T05-45-51.088274.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-25T05-45-51.088274.parquet'
- config_name: results
data_files:
- split: 2024_01_25T05_45_51.088274
path:
- results_2024-01-25T05-45-51.088274.parquet
- split: latest
path:
- results_2024-01-25T05-45-51.088274.parquet
---
# Dataset Card for Evaluation run of TeeZee/Buttocks-7B-v1.0
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [TeeZee/Buttocks-7B-v1.0](https://huggingface.co/TeeZee/Buttocks-7B-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TeeZee__Buttocks-7B-v1.0",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-25T05:45:51.088274](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__Buttocks-7B-v1.0/blob/main/results_2024-01-25T05-45-51.088274.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.49958748050002727,
"acc_stderr": 0.03449947558483939,
"acc_norm": 0.5072913093747228,
"acc_norm_stderr": 0.03532795103647748,
"mc1": 0.2962056303549572,
"mc1_stderr": 0.015983595101811392,
"mc2": 0.4472415883922134,
"mc2_stderr": 0.015128282783775687
},
"harness|arc:challenge|25": {
"acc": 0.5213310580204779,
"acc_stderr": 0.014598087973127106,
"acc_norm": 0.5460750853242321,
"acc_norm_stderr": 0.01454922110517187
},
"harness|hellaswag|10": {
"acc": 0.578868751244772,
"acc_stderr": 0.004927314729433553,
"acc_norm": 0.7561242780322645,
"acc_norm_stderr": 0.004285410130466104
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.043163785995113245,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.043163785995113245
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4342105263157895,
"acc_stderr": 0.040335656678483184,
"acc_norm": 0.4342105263157895,
"acc_norm_stderr": 0.040335656678483184
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5358490566037736,
"acc_stderr": 0.030693675018458003,
"acc_norm": 0.5358490566037736,
"acc_norm_stderr": 0.030693675018458003
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5347222222222222,
"acc_stderr": 0.04171115858181618,
"acc_norm": 0.5347222222222222,
"acc_norm_stderr": 0.04171115858181618
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4508670520231214,
"acc_stderr": 0.03794012674697029,
"acc_norm": 0.4508670520231214,
"acc_norm_stderr": 0.03794012674697029
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.04724007352383888,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.04724007352383888
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4595744680851064,
"acc_stderr": 0.03257901482099835,
"acc_norm": 0.4595744680851064,
"acc_norm_stderr": 0.03257901482099835
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.35964912280701755,
"acc_stderr": 0.04514496132873633,
"acc_norm": 0.35964912280701755,
"acc_norm_stderr": 0.04514496132873633
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.45517241379310347,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.45517241379310347,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.36243386243386244,
"acc_stderr": 0.024757473902752035,
"acc_norm": 0.36243386243386244,
"acc_norm_stderr": 0.024757473902752035
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2698412698412698,
"acc_stderr": 0.03970158273235172,
"acc_norm": 0.2698412698412698,
"acc_norm_stderr": 0.03970158273235172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6064516129032258,
"acc_stderr": 0.027791878753132267,
"acc_norm": 0.6064516129032258,
"acc_norm_stderr": 0.027791878753132267
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.39408866995073893,
"acc_stderr": 0.034381579670365446,
"acc_norm": 0.39408866995073893,
"acc_norm_stderr": 0.034381579670365446
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6303030303030303,
"acc_stderr": 0.03769430314512568,
"acc_norm": 0.6303030303030303,
"acc_norm_stderr": 0.03769430314512568
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6565656565656566,
"acc_stderr": 0.03383201223244441,
"acc_norm": 0.6565656565656566,
"acc_norm_stderr": 0.03383201223244441
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.689119170984456,
"acc_stderr": 0.03340361906276586,
"acc_norm": 0.689119170984456,
"acc_norm_stderr": 0.03340361906276586
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4846153846153846,
"acc_stderr": 0.025339003010106522,
"acc_norm": 0.4846153846153846,
"acc_norm_stderr": 0.025339003010106522
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.027634907264178544,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.027634907264178544
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5378151260504201,
"acc_stderr": 0.03238546948758979,
"acc_norm": 0.5378151260504201,
"acc_norm_stderr": 0.03238546948758979
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.6990825688073394,
"acc_stderr": 0.019664751366802114,
"acc_norm": 0.6990825688073394,
"acc_norm_stderr": 0.019664751366802114
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.03362277436608043,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.03362277436608043
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6127450980392157,
"acc_stderr": 0.03418931233833342,
"acc_norm": 0.6127450980392157,
"acc_norm_stderr": 0.03418931233833342
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6413502109704642,
"acc_stderr": 0.031219569445301833,
"acc_norm": 0.6413502109704642,
"acc_norm_stderr": 0.031219569445301833
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5605381165919282,
"acc_stderr": 0.03331092511038179,
"acc_norm": 0.5605381165919282,
"acc_norm_stderr": 0.03331092511038179
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5801526717557252,
"acc_stderr": 0.04328577215262971,
"acc_norm": 0.5801526717557252,
"acc_norm_stderr": 0.04328577215262971
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6528925619834711,
"acc_stderr": 0.04345724570292535,
"acc_norm": 0.6528925619834711,
"acc_norm_stderr": 0.04345724570292535
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5648148148148148,
"acc_stderr": 0.04792898170907061,
"acc_norm": 0.5648148148148148,
"acc_norm_stderr": 0.04792898170907061
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5828220858895705,
"acc_stderr": 0.03874102859818081,
"acc_norm": 0.5828220858895705,
"acc_norm_stderr": 0.03874102859818081
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973647,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973647
},
"harness|hendrycksTest-management|5": {
"acc": 0.6796116504854369,
"acc_stderr": 0.04620284082280042,
"acc_norm": 0.6796116504854369,
"acc_norm_stderr": 0.04620284082280042
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7649572649572649,
"acc_stderr": 0.027778835904935437,
"acc_norm": 0.7649572649572649,
"acc_norm_stderr": 0.027778835904935437
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6411238825031929,
"acc_stderr": 0.017152991797501342,
"acc_norm": 0.6411238825031929,
"acc_norm_stderr": 0.017152991797501342
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5,
"acc_stderr": 0.026919095102908273,
"acc_norm": 0.5,
"acc_norm_stderr": 0.026919095102908273
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23687150837988827,
"acc_stderr": 0.014219570788103986,
"acc_norm": 0.23687150837988827,
"acc_norm_stderr": 0.014219570788103986
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5032679738562091,
"acc_stderr": 0.028629305194003543,
"acc_norm": 0.5032679738562091,
"acc_norm_stderr": 0.028629305194003543
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5852090032154341,
"acc_stderr": 0.027982680459759567,
"acc_norm": 0.5852090032154341,
"acc_norm_stderr": 0.027982680459759567
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5123456790123457,
"acc_stderr": 0.027812262269327242,
"acc_norm": 0.5123456790123457,
"acc_norm_stderr": 0.027812262269327242
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.40070921985815605,
"acc_stderr": 0.029233465745573086,
"acc_norm": 0.40070921985815605,
"acc_norm_stderr": 0.029233465745573086
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3852672750977836,
"acc_stderr": 0.012429485434955182,
"acc_norm": 0.3852672750977836,
"acc_norm_stderr": 0.012429485434955182
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.47058823529411764,
"acc_stderr": 0.030320243265004137,
"acc_norm": 0.47058823529411764,
"acc_norm_stderr": 0.030320243265004137
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.4820261437908497,
"acc_stderr": 0.020214761037872408,
"acc_norm": 0.4820261437908497,
"acc_norm_stderr": 0.020214761037872408
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5346938775510204,
"acc_stderr": 0.03193207024425314,
"acc_norm": 0.5346938775510204,
"acc_norm_stderr": 0.03193207024425314
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6318407960199005,
"acc_stderr": 0.03410410565495302,
"acc_norm": 0.6318407960199005,
"acc_norm_stderr": 0.03410410565495302
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4578313253012048,
"acc_stderr": 0.038786267710023595,
"acc_norm": 0.4578313253012048,
"acc_norm_stderr": 0.038786267710023595
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6374269005847953,
"acc_stderr": 0.036871306155620606,
"acc_norm": 0.6374269005847953,
"acc_norm_stderr": 0.036871306155620606
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2962056303549572,
"mc1_stderr": 0.015983595101811392,
"mc2": 0.4472415883922134,
"mc2_stderr": 0.015128282783775687
},
"harness|winogrande|5": {
"acc": 0.6890292028413575,
"acc_stderr": 0.013009534736286058
},
"harness|gsm8k|5": {
"acc": 0.0576194086429113,
"acc_stderr": 0.006418593319822861
}
}
```
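As an illustrative sketch of how these per-task scores roll up (the values below are copied verbatim from the JSON above; the task subset and the `macro_average` helper are hypothetical, chosen only to demonstrate the aggregation, not part of the leaderboard tooling):

```python
# Macro-average a few "acc" scores taken verbatim from the results JSON above.
# This is an unweighted mean over tasks, shown for illustration only.
scores = {
    "harness|arc:challenge|25": 0.5213310580204779,
    "harness|hellaswag|10": 0.578868751244772,
    "harness|winogrande|5": 0.6890292028413575,
}

def macro_average(task_scores):
    """Unweighted mean over task accuracies."""
    return sum(task_scores.values()) / len(task_scores)

print(round(macro_average(scores), 4))  # → 0.5964
```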
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
tner/ontonotes5 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Ontonotes5
---
# Dataset Card for "tner/ontonotes5"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/N06-2015/](https://aclanthology.org/N06-2015/)
- **Dataset:** Ontonotes5
- **Domain:** News
- **Number of Entity:** 8
### Dataset Summary
The Ontonotes5 NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `CARDINAL`, `DATE`, `PERSON`, `NORP`, `GPE`, `LAW`, `PERCENT`, `ORDINAL`, `MONEY`, `WORK_OF_ART`, `FAC`, `TIME`, `QUANTITY`, `PRODUCT`, `LANGUAGE`, `ORG`, `LOC`, `EVENT`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 5, 0, 0, 0, 0, 11, 12, 12, 12, 12, 0, 0, 7, 0, 0, 0, 0, 0],
'tokens': ['``', 'It', "'s", 'very', 'costly', 'and', 'time', '-', 'consuming', ',', "''", 'says', 'Phil', 'Rosen', ',', 'a', 'partner', 'in', 'Fleet', '&', 'Leasing', 'Management', 'Inc.', ',', 'a', 'Boston', 'car', '-', 'leasing', 'company', '.']
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/onotonotes5/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-CARDINAL": 1,
"B-DATE": 2,
"I-DATE": 3,
"B-PERSON": 4,
"I-PERSON": 5,
"B-NORP": 6,
"B-GPE": 7,
"I-GPE": 8,
"B-LAW": 9,
"I-LAW": 10,
"B-ORG": 11,
"I-ORG": 12,
"B-PERCENT": 13,
"I-PERCENT": 14,
"B-ORDINAL": 15,
"B-MONEY": 16,
"I-MONEY": 17,
"B-WORK_OF_ART": 18,
"I-WORK_OF_ART": 19,
"B-FAC": 20,
"B-TIME": 21,
"I-CARDINAL": 22,
"B-LOC": 23,
"B-QUANTITY": 24,
"I-QUANTITY": 25,
"I-NORP": 26,
"I-LOC": 27,
"B-PRODUCT": 28,
"I-TIME": 29,
"B-EVENT": 30,
"I-EVENT": 31,
"I-FAC": 32,
"B-LANGUAGE": 33,
"I-PRODUCT": 34,
"I-ORDINAL": 35,
"I-LANGUAGE": 36
}
```
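As a minimal sketch (the mapping is a subset copied from the dictionary above; the `decode` helper is illustrative, not part of the dataset tooling), the integer `tags` can be mapped back to BIO label strings by inverting `label2id`:

```python
# Invert (a subset of) the label2id mapping shown above to decode tag IDs.
label2id = {
    "O": 0, "B-CARDINAL": 1, "B-DATE": 2, "I-DATE": 3,
    "B-PERSON": 4, "I-PERSON": 5, "B-NORP": 6, "B-GPE": 7,
    "B-ORG": 11, "I-ORG": 12,
}
id2label = {v: k for k, v in label2id.items()}

def decode(tags):
    """Map a sequence of integer tag IDs to their BIO label strings."""
    return [id2label[t] for t in tags]

# A slice of the tags from the `train` example above:
print(decode([0, 0, 4, 5, 11, 12]))
# → ['O', 'O', 'B-PERSON', 'I-PERSON', 'B-ORG', 'I-ORG']
```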
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|ontonotes5|59924| 8528|8262|
### Citation Information
```
@inproceedings{hovy-etal-2006-ontonotes,
title = "{O}nto{N}otes: The 90{\%} Solution",
author = "Hovy, Eduard and
Marcus, Mitchell and
Palmer, Martha and
Ramshaw, Lance and
Weischedel, Ralph",
booktitle = "Proceedings of the Human Language Technology Conference of the {NAACL}, Companion Volume: Short Papers",
month = jun,
year = "2006",
address = "New York City, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N06-2015",
pages = "57--60",
}
``` |
Paia2/raulbio2 | ---
license: openrail
---
|
lovesnowbest/T-Eval | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
- zh
tags:
- code
pretty_name: teval
size_categories:
- 100M<n<1B
---
# T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step
[](https://arxiv.org/abs/2312.14033)
[](./LICENSE)
## ✨ Introduction
This is an evaluation harness for the benchmark described in [T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step](https://arxiv.org/abs/2312.14033).
[[Paper](https://arxiv.org/abs/2312.14033)]
[[Project Page](https://open-compass.github.io/T-Eval/)]
[[LeaderBoard](https://open-compass.github.io/T-Eval/leaderboard.html)]
[[HuggingFace](https://huggingface.co/datasets/lovesnowbest/T-Eval)]
> Large language models (LLM) have achieved remarkable performance on various NLP tasks and are augmented by tools for broader applications. Yet, how to evaluate and analyze the tool utilization capability of LLMs is still under-explored. In contrast to previous works that evaluate models holistically, we comprehensively decompose the tool utilization into multiple sub-processes, including instruction following, planning, reasoning, retrieval, understanding, and review. Based on that, we further introduce T-Eval to evaluate the tool-utilization capability step by step. T-Eval disentangles the tool utilization evaluation into several sub-domains along model capabilities, facilitating the inner understanding of both holistic and isolated competency of LLMs. We conduct extensive experiments on T-Eval and in-depth analysis of various LLMs. T-Eval not only exhibits consistency with the outcome-oriented evaluation but also provides a more fine-grained analysis of the capabilities of LLMs, providing a new perspective in LLM evaluation on tool-utilization ability.
<!--
[T-Eval: ]()<br>
Zehui Chen<sup>♠</sup>, Weihua Du<sup>♠</sup>, Wenwei Zhang<sup>♠</sup>, Kuikun Liu, Jiangning Liu, Miao Zheng, Jingming Zhuo, Songyang Zhang, Dahua Lin, Kai Chen<sup>♦</sup>, Feng Zhao<sup>♦</sup>
<sup>♠</sup> Equal Contribution<br>
<sup>♦</sup> Corresponding Author -->
<div>
<center>
<img src="figs/teaser.png">
</div>
## 🚀 What's New
- **[2024.02.18]** Release new [data](https://drive.google.com/file/d/1nQ0pn26qd0FGU8UkfSTxNdu6uWI0QXTY/view?usp=sharing) (both Chinese and English) and code for faster inference!🚀🚀🚀 The leaderboard will be updated soon! We also provide template examples for reference.
- **[2024.01.08]** Release [ZH Leaderboard](https://open-compass.github.io/T-Eval/leaderboard_zh.html) and ~~[ZH data](https://drive.google.com/file/d/1z25duwZAnBrPN5jYu9-8RMvfqnwPByKV/view?usp=sharing)~~, where the questions and answer formats are in Chinese. (公布了中文评测数据集和榜单)✨✨✨
- **[2023.12.22]** Paper available on [ArXiv](https://arxiv.org/abs/2312.14033). 🔥🔥🔥
- **[2023.12.21]** Release the test scripts and data for T-Eval. 🎉🎉🎉
## 🧾 TODO
- [x] Change the role of function response from `system` to `function`.
- [x] Merge consecutive same role conversations.
- [x] Provide template configs for open-sourced models.
- [x] Provide dev set for T-Eval, reducing the evaluation time.
- [x] Optimize the inference pipeline of huggingface model provided by Lagent, which will be 3x faster. **(Please upgrade Lagent to v0.2)**
- [ ] Support inference on Opencompass.
~~NOTE: These TODOs will be started after 2024.2.1~~ Thanks for your patience!
## 🛠️ Preparations
```bash
$ git clone https://github.com/open-compass/T-Eval.git
$ cd T-Eval
$ pip install -r requirements.txt
```
## 🛫️ Get Started
We support both API-based models and HuggingFace models via [Lagent](https://github.com/InternLM/lagent).
### 💾 Test Data
We provide both Google Drive and Hugging Face Datasets download links for the test data:
1. Google Drive
~~[[EN data](https://drive.google.com/file/d/1ebR6WCCbS9-u2x7mWpWy8wV_Gb6ltgpi/view?usp=sharing)] (English format) [[ZH data](https://drive.google.com/file/d/1z25duwZAnBrPN5jYu9-8RMvfqnwPByKV/view?usp=sharing)] (Chinese format)~~
[T-Eval Data](https://drive.google.com/file/d/1nQ0pn26qd0FGU8UkfSTxNdu6uWI0QXTY/view?usp=sharing)
2. HuggingFace Datasets
You can also access the dataset through huggingface via this [link](https://huggingface.co/datasets/lovesnowbest/T-Eval).
```python
from datasets import load_dataset
dataset = load_dataset("lovesnowbest/T-Eval")
```
After downloading, please put the data in the `data` folder directly:
```
- data/
- instruct_v2.json
- plan_json_v2.json
...
```
### 🤖 API Models
1. Set your OPENAI key in your environment.
```bash
export OPENAI_API_KEY=xxxxxxxxx
```
2. Run the model with the following scripts
```bash
# test all data at once
sh test_all_en.sh api gpt-4-1106-preview gpt4
# test ZH dataset
sh test_all_zh.sh api gpt-4-1106-preview gpt4
# test for Instruct only
python test.py --model_type api --model_path gpt-4-1106-preview --resume --out_name instruct_gpt4.json --out_dir work_dirs/gpt4/ --dataset_path data/instruct_v2.json --eval instruct --prompt_type json
```
### 🤗 HuggingFace Models
1. Download the huggingface model to your local path.
2. Modify the `meta_template` json according to your tested model.
3. Run the model with the following scripts
```bash
# test all data at once
sh test_all_en.sh hf $HF_PATH $HF_MODEL_NAME $META_TEMPLATE
# test ZH dataset
sh test_all_zh.sh hf $HF_PATH $HF_MODEL_NAME $META_TEMPLATE
# test for Instruct only
python test.py --model_type hf --model_path $HF_PATH --resume --out_name instruct_$HF_MODEL_NAME.json --out_dir data/work_dirs/ --dataset_path data/instruct_v1.json --eval instruct --prompt_type json --model_display_name $HF_MODEL_NAME --meta_template $META_TEMPLATE
```
### 💫 Final Results
Once you have finished all test samples, detailed evaluation results will be logged at `$out_dir/$model_display_name/$model_display_name_-1.json` (for the ZH dataset, there is a `_zh` suffix). To obtain your final score, run the following command:
```bash
python teval/utils/convert_results.py --result_path $out_dir/$model_display_name/$model_display_name_-1.json
```
## 🔌 Protocols
T-Eval adopts multi-conversation style evaluation to gauge the model. The format of our saved prompt is as follows:
```python
[
{
"role": "system",
"content": "You have access to the following API:\n{'name': 'AirbnbSearch.search_property_by_place', 'description': 'This function takes various parameters to search properties on Airbnb.', 'required_parameters': [{'name': 'place', 'type': 'STRING', 'description': 'The name of the destination.'}], 'optional_parameters': [], 'return_data': [{'name': 'property', 'description': 'a list of at most 3 properties, containing id, name, and address.'}]}\nPlease generate the response in the following format:\ngoal: goal to call this action\n\nname: api name to call\n\nargs: JSON format api args in ONLY one line\n"
},
{
"role": "user",
"content": "Call the function AirbnbSearch.search_property_by_place with the parameter as follows: 'place' is 'Berlin'."
}
]
```
where `role` can be one of ['system', 'user', 'assistant'], and `content` must be a string. Before running inference with an LLM, we need to convert the conversation into a raw string via a `meta_template`. Example `meta_template`s are provided at [meta_template.py](teval/utils/meta_template.py):
```python
[
dict(role='system', begin='<|System|>:', end='\n'),
dict(role='user', begin='<|User|>:', end='\n'),
dict(
role='assistant',
begin='<|Bot|>:',
end='<eoa>\n',
generate=True)
]
```
You need to specify the `begin` and `end` tokens for your tested huggingface model in [meta_template.py](teval/utils/meta_template.py) and pass the `meta_template` arg to `test.py`, using the same name you set in `meta_template.py`. For OpenAI models, we handle this for you.
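As a hedged illustration of this step (the helper name `apply_meta_template` and the wrapping logic below are assumptions for exposition, not T-Eval's internal API), constructing the raw string from a conversation might look like:

```python
# Illustrative sketch: wrap each message in its role's begin/end tokens,
# then open the assistant turn so the model generates from there.
meta_template = [
    dict(role='system', begin='<|System|>:', end='\n'),
    dict(role='user', begin='<|User|>:', end='\n'),
    dict(role='assistant', begin='<|Bot|>:', end='<eoa>\n', generate=True),
]

def apply_meta_template(messages, template):
    role_map = {t['role']: t for t in template}
    prompt = ''
    for msg in messages:
        spec = role_map[msg['role']]
        prompt += spec['begin'] + msg['content'] + spec['end']
    # Append the begin token of the generating role (the assistant)
    gen = next(t for t in template if t.get('generate'))
    return prompt + gen['begin']

messages = [
    {'role': 'system', 'content': 'You have access to the following API: ...'},
    {'role': 'user', 'content': "Call AirbnbSearch.search_property_by_place with 'place' set to 'Berlin'."},
]
raw_prompt = apply_meta_template(messages, meta_template)
```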
## 📊 Benchmark Results
More detailed and comprehensive benchmark results can refer to 🏆 [T-Eval official leaderboard](https://open-compass.github.io/T-Eval/leaderboard.html) !
<div>
<center>
<img src="figs/teval_results.png">
</div>
### ✉️ Submit Your Results
You can submit your inference results (generated by running `test.py`) to [lovesnow@mail.ustc.edu.cn](mailto:lovesnow@mail.ustc.edu.cn). We will run your predictions and update the results on our leaderboard. Please also provide the scale of your tested model. A sample structure of your submission should be like:
```
$model_display_name/
instruct_$model_display_name/
query_0_1_0.json
query_0_1_1.json
...
plan_json_$model_display_name/
plan_str_$model_display_name/
...
```
## ❤️ Acknowledgements
T-Eval is built with [Lagent](https://github.com/InternLM/lagent) and [OpenCompass](https://github.com/open-compass/opencompass). Thanks for their awesome work!
## 🖊️ Citation
If you find this project useful in your research, please consider citing:
```
@article{chen2023t,
title={T-Eval: Evaluating the Tool Utilization Capability Step by Step},
author={Chen, Zehui and Du, Weihua and Zhang, Wenwei and Liu, Kuikun and Liu, Jiangning and Zheng, Miao and Zhuo, Jingming and Zhang, Songyang and Lin, Dahua and Chen, Kai and others},
journal={arXiv preprint arXiv:2312.14033},
year={2023}
}
```
## 💳 License
This project is released under the Apache 2.0 [license](./LICENSE). |
Artur22823/Whisper | ---
license: mpl-2.0
---
|
nihaomur/breeze7B_intruct_med_tokenized | ---
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 170464820
num_examples: 100000
download_size: 60997983
dataset_size: 170464820
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shivanikumar/Reformatted_dataset | ---
dataset_info:
features:
- name: Questions
dtype: string
- name: Answers
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 285128
num_examples: 299
download_size: 139255
dataset_size: 285128
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/054e2211 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1338
dataset_size: 188
---
# Dataset Card for "054e2211"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
g-ronimo/oasst2_top4k_en | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 7744472.411884111
num_examples: 4000
- name: test
num_bytes: 774447.2411884111
num_examples: 400
download_size: 4492003
dataset_size: 8518919.653072523
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "oasst2_top4k_en"
* Top ranked conversations of https://huggingface.co/datasets/OpenAssistant/oasst2
* Deduplicated: excluded all entries where any two long (>100 chars) answers are similar (cosine sim. > 0.8)
* Splits: train (4000 entries), test (400 entries)
* language-filtered: en
* generated using https://github.com/blancsw/deep_4_all/blob/main/datasets/oasst/convert.py
* sequence length distribution (X=num. of tokens, Y=frequency)

|
wikipunk/fibo2023Q3 | ---
language:
- en
license: mit
tags:
- knowledge-graph
- rdf
- owl
- ontology
annotations_creators:
- expert-generated
pretty_name: FIBO
size_categories:
- 100K<n<1M
task_categories:
- graph-ml
dataset_info:
features:
- name: subject
dtype: string
- name: predicate
dtype: string
- name: object
dtype: string
config_name: default
splits:
- name: train
num_bytes: 56045523
num_examples: 236579
dataset_size: 56045523
viewer: false
---
# FIBO: The Financial Industry Business Ontology
### Overview
In the world of financial technology, the vastness of data and the
complexity of financial instruments present both challenges and
opportunities. The Financial Industry Business Ontology (FIBO) offers
a structured framework that bridges the gap between theoretical
financial concepts and real-world data. I believe machine learning
researchers interested in the financial sector could use the
relationships in FIBO for financial feature engineering, whether
fine-tuning existing models or building new ones.
#### Open Source
The FIBO ontology is developed on GitHub at
https://github.com/edmcouncil/fibo/.
### Use-cases
- Comprehensive Data Structure: FIBO offers a broad spectrum of
financial concepts, ranging from derivatives to securities. This
design, rooted in expert knowledge from both the knowledge
representation and financial sectors, ensures a profound
understanding of financial instruments.
- Decoding Complex Relationships: The financial domain is
characterized by its intricate interdependencies. FIBO's structured
approach provides clarity on these relationships, enabling machine
learning algorithms to identify patterns and correlations within
large datasets.
- Linkage with Real-world Data: A distinguishing feature of FIBO is
its capability to associate financial concepts with real-world
financial data and controlled vocabularies. This connection is
crucial for researchers aiming to apply theoretical insights in
practical contexts in financial enterprises with their existing
data.
- Retrieval Augmented Generation: The advent of Large Language Models,
particularly in conjunction with Retrieval Augmented Generation
(RAG), holds promise for revolutionizing the way financial data is
processed and interpreted.
- Document Classification: With the surge in financial documents,
  utilizing RAG to categorize financial datasets classified by FIBO
concepts can assist financial analysts in achieving enhanced
accuracy and depth in data interpretation, facilitated by
intelligent prompting.
#### Building and Verification:
1. **Construction**: The ontology was imported from
[AboutFIBOProd-IncludingReferenceData](https://github.com/edmcouncil/fibo/blob/master/AboutFIBOProd-IncludingReferenceData.rdf)
into Protege version 5.6.1.
2. **Reasoning**: Due to the large size of the ontology I used the ELK
reasoner plugin to materialize (make explicit) inferences in the
ontology.
3. **Coherence Check**: The Debug Ontology plugin in Protege was used
to ensure the ontology's coherence and consistency.
4. **Export**: After verification, inferred axioms, along with
asserted axioms and annotations, were [exported using Protege](https://www.michaeldebellis.com/post/export-inferred-axioms).
5. **Encoding and Compression**: [Apache Jena's
riot](https://jena.apache.org/documentation/tools/) was used to convert the
result to ntriples, which was then compressed with gzip. This
compressed artifact is downloaded and extracted by the Hugging Face
datasets library to yield the examples in the dataset.
### Usage
First make sure you have the requirements installed:
```bash
pip install datasets
pip install rdflib
```
You can load the dataset using the Hugging Face Datasets library with the following Python code:
```python
from datasets import load_dataset
dataset = load_dataset('wikipunk/fibo2023Q3', split='train')
```
## Features
The FIBO dataset is composed of triples representing the relationships
between different financial concepts and named individuals such as
market participants, corporations, and contractual agents.
#### Note on Format:
The subject, predicate, and object features are stored in N3 notation
with no prefix mappings. This allows users to parse each component
using `rdflib.util.from_n3` from the RDFLib Python library.
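For illustration, a simplified stand-in for `rdflib.util.from_n3` shows what these N3 strings contain (real code should use rdflib, which also handles datatypes, language tags, and blank nodes properly):

```python
def parse_n3_term(term: str):
    """Toy N3 term parser for exposition only; use rdflib.util.from_n3
    in practice."""
    if term.startswith("<") and term.endswith(">"):
        return ("uri", term[1:-1])           # IRI, e.g. a FIBO concept
    if term.startswith('"'):
        return ("literal", term.strip('"'))  # plain literal (no datatype handling)
    return ("bnode", term)                   # anything else: blank node label

term_n3 = "<https://spec.edmcouncil.org/fibo/ontology/SEC/Securities/SecuritiesListings/isTradedOn>"
kind, iri = parse_n3_term(term_n3)
```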
### 1. **Subject** (`string`)
The subject of a triple is the primary entity or focus of the statement. In this dataset, the subject often represents a specific financial instrument or entity. For instance:
`<https://spec.edmcouncil.org/fibo/ontology/SEC/Equities/EquitiesExampleIndividuals/XNYSListedTheCoca-ColaCompanyCommonStock>`
refers to the common stock of The Coca-Cola Company that is listed on
the NYSE.
### 2. **Predicate** (`string`)
The predicate of a triple indicates the nature of the relationship between the subject and the object. It describes a specific property, characteristic, or connection of the subject. In our example:
`<https://spec.edmcouncil.org/fibo/ontology/SEC/Securities/SecuritiesListings/isTradedOn>`
signifies that the financial instrument (subject) is traded on a
particular exchange (object).
### 3. **Object** (`string`)
The object of a triple is the entity or value that is associated with the subject via the predicate. It can be another financial concept, a trading platform, or any other related entity. In the context of our example:
`<https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/NorthAmericanEntities/USMarketsAndExchangesIndividuals/NewYorkStockExchange>`
represents the New York Stock Exchange where the aforementioned
Coca-Cola common stock is traded.
#### Continued
Here is an another example of a triple in the dataset:
- Subject: `"<https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/MarketsIndividuals/ServiceProvider-L-JEUVK5RWVJEN8W0C9M24>"`
- Predicate: `"<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"`
- Object: `"<https://spec.edmcouncil.org/fibo/ontology/BE/FunctionalEntities/FunctionalEntities/FunctionalEntity>"`
This triple represents the statement that the market individual
[ServiceProvider-L-JEUVK5RWVJEN8W0C9M24](https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/MarketsIndividuals/ServiceProvider-L-JEUVK5RWVJEN8W0C9M24)
has a type of
[FunctionalEntity](https://spec.edmcouncil.org/fibo/ontology/BE/FunctionalEntities/FunctionalEntities/FunctionalEntity).
#### Note:
The dataset contains example individuals from the ontology as
reference points. These examples provide a structured framework for
understanding the relationships and entities within the financial
domain. However, the individuals included are not exhaustive. With
advancements in Large Language Models, especially Retrieval Augmented
Generation (RAG), there's potential to generate and expand upon these
examples, enriching the dataset with more structured data and
insights.
### FIBO Viewer
Use the [FIBO Viewer](https://spec.edmcouncil.org/fibo/ontology) to
explore the ontology on the web. One of the coolest features of FIBO
is that entities with the prefix
https://spec.edmcouncil.org/fibo/ontology/ can be looked up on the web
just by opening their URL in a browser or any HTTP client.
## Ideas for Deriving Graph Neural Network Features from FIBO:
Graph Neural Networks (GNNs) have emerged as a powerful tool for
machine learning on structured data. FIBO, with its structured
ontology, can be leveraged to derive features for GNNs.
### Node Features:
- **rdf:type**: Each entity in FIBO has one or more associated `rdf:type`,
`<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>`, that
indicates its class or category. This can serve as a primary node
feature to encode.
- **Entity Attributes**: Attributes of each entity, such as names or
descriptions, can be used as additional node features. Consider
embedding descriptions using a semantic text embedding model.
### Edge Features:
- **RDF Predicates**: The relationships between entities in FIBO are
represented using RDF predicates. These predicates can serve as edge
features in a GNN, capturing the nature of the relationship between
nodes.
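As a sketch of how such features could be assembled (function and variable names here are illustrative, not part of FIBO or any GNN library), the dataset's triples can be mapped to integer node indices and typed edges, the minimal input most GNN frameworks expect:

```python
def triples_to_graph(triples):
    """Turn (subject, predicate, object) triples into integer node ids
    and typed edges of the form (src, relation, dst)."""
    node_id, edge_type = {}, {}
    edges = []
    for s, p, o in triples:
        for term in (s, o):
            node_id.setdefault(term, len(node_id))
        edge_type.setdefault(p, len(edge_type))
        edges.append((node_id[s], edge_type[p], node_id[o]))
    return node_id, edge_type, edges
```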
### Potential Applications:
1. **Entity Classification**: Using the derived node and edge
features, GNNs can classify entities into various financial
categories, enhancing the granularity of financial data analysis.
2. **Relationship Prediction**: GNNs can predict potential
relationships between entities, aiding in the discovery of hidden
patterns or correlations within the financial data.
3. **Anomaly Detection**: By training GNNs on the structured data from
   FIBO and interlinked financial datasets, anomalies or
   irregularities can be detected, helping to ensure data integrity
   and accuracy.
### Acknowledgements
We extend our sincere gratitude to the FIBO contributors for their
meticulous efforts in knowledge representation. Their expertise and
dedication have been instrumental in shaping a comprehensive and
insightful framework that serves as a cornerstone for innovation in
the financial industry.
If you are interested in modeling the financial industry you should
consider [contributing to
FIBO](https://github.com/edmcouncil/fibo/blob/master/CONTRIBUTING.md).
### Citation
```bibtex
@misc{fibo2023Q3,
title={Financial Industry Business Ontology (FIBO)},
author={Object Management Group, Inc. and EDM Council, Inc. and Various Contributors},
year={2023},
note={Available as OWL 2 ontologies and UML models compliant with the Semantics for Information Modeling and Federation (SMIF) draft specification. Contributions are open on GitHub, consult the repository for a list of contributors.},
howpublished={\url{https://spec.edmcouncil.org/fibo/}},
abstract={The Financial Industry Business Ontology (FIBO) is a collaborative effort to standardize the language used to define the terms, conditions, and characteristics of financial instruments; the legal and relationship structure of business entities; the content and time dimensions of market data; and the legal obligations and process aspects of corporate actions.},
license={MIT License, \url{https://opensource.org/licenses/MIT}}
}
```
|
KimKimzed/dataset_repository_name | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ImagenHub/Subject_Driven_Image_Editing | ---
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
- split: extra
path: data/extra-*
dataset_info:
features:
- name: uid
dtype: int64
- name: image
dtype: image
- name: subject
dtype: string
splits:
- name: eval
num_bytes: 4414578.0
num_examples: 154
- name: extra
num_bytes: 1779741.0
num_examples: 66
download_size: 6179822
dataset_size: 6194319.0
---
# Dataset Card
Dataset in [ImagenHub](https://arxiv.org/abs/2310.01596).
# Citation
Please kindly cite our paper if you use our code, data, models or results:
```
@article{ku2023imagenhub,
title={ImagenHub: Standardizing the evaluation of conditional image generation models},
author={Max Ku and Tianle Li and Kai Zhang and Yujie Lu and Xingyu Fu and Wenwen Zhuang and Wenhu Chen},
journal={arXiv preprint arXiv:2310.01596},
year={2023}
}
``` |
tanzirghumay/425 | ---
license: apache-2.0
---
|
kanishka/counterfactual_babylm_prototypical_only | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 581827546
num_examples: 11633845
- name: validation
num_bytes: 56120230
num_examples: 1026747
download_size: 421677407
dataset_size: 637947776
---
# Dataset Card for "counterfactual_babylm_prototypical_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/cyberpunkedgerunners | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Cyberpunk: Edgerunners
This is the image base of bangumi Cyberpunk: Edgerunners, we detected 21 characters, 1227 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 48 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 81 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 333 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 26 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 41 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 20 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 58 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 111 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 48 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 32 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 28 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 25 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 17 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 28 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 22 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 19 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 32 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 19 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 24 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 9 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 206 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
noxneural/orcapaca_albanian | ---
task_categories:
- question-answering
language:
- sq
pretty_name: AlbanianPaca
size_categories:
- 10K<n<100K
---
|
phiyodr/coco2017 | ---
language:
- en
pretty_name: COCO2017
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
task_ids:
- image-captioning
tags:
- coco
- image-captioning
dataset_info:
features:
- name: license
dtype: int64
- name: file_name
dtype: string
- name: coco_url
dtype: string
- name: height
dtype: int64
- name: width
dtype: int64
- name: date_captured
dtype: string
- name: flickr_url
dtype: string
- name: image_id
dtype: int64
- name: ids
sequence: int64
- name: captions
sequence: string
splits:
- name: train
num_bytes: 64026361
num_examples: 118287
- name: validation
num_bytes: 2684731
num_examples: 5000
download_size: 30170127
dataset_size: 66711092
---
# coco2017
Image-text pairs from [MS COCO2017](https://cocodataset.org/#download).
## Data origin
* Data originates from [cocodataset.org](http://images.cocodataset.org/annotations/annotations_trainval2017.zip)
* `phiyodr/coco2017` uses a dense format: one row per image, with several captions (and their ids) per row.
* `phiyodr/coco2017-long` uses a long format: one row per caption (aka sentence). There are 5 rows (sometimes more) with the same image details, so it is roughly five times as long as the dense version.
## Format
```python
DatasetDict({
train: Dataset({
features: ['license', 'file_name', 'coco_url', 'height', 'width', 'date_captured', 'flickr_url', 'image_id', 'ids', 'captions'],
num_rows: 118287
})
validation: Dataset({
features: ['license', 'file_name', 'coco_url', 'height', 'width', 'date_captured', 'flickr_url', 'image_id', 'ids', 'captions'],
num_rows: 5000
})
})
```
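A minimal sketch of how the dense rows above can be exploded into the long format (field names are taken from the feature list; `phiyodr/coco2017-long` is the ready-made result, so this is only illustrative):

```python
def to_long_format(rows):
    """Yield one row per caption from dense rows carrying parallel
    `ids` and `captions` sequences."""
    for row in rows:
        for sent_id, caption in zip(row["ids"], row["captions"]):
            out = {k: v for k, v in row.items() if k not in ("ids", "captions")}
            out["id"] = sent_id
            out["caption"] = caption
            yield out
```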
## Usage
* Download image data and unzip
```bash
cd PATH_TO_IMAGE_FOLDER
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
#wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip # zip not needed: everything you need is in load_dataset("phiyodr/coco2017")
unzip train2017.zip
unzip val2017.zip
```
* Load dataset in Python
```python
import os
from datasets import load_dataset
PATH_TO_IMAGE_FOLDER = "COCO2017"
def create_full_path(example):
"""Create full path to image using `base_path` to COCO2017 folder."""
example["image_path"] = os.path.join(PATH_TO_IMAGE_FOLDER, example["file_name"])
return example
dataset = load_dataset("phiyodr/coco2017")
dataset = dataset.map(create_full_path)
``` |
Sk4372/Try | ---
license: openrail
---
|
Glac1er/holdinwpn | ---
license: unknown
---
|
marcus2000/new_sentiment | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiment
dtype: string
splits:
- name: train
num_bytes: 9012929
num_examples: 6195
download_size: 4355943
dataset_size: 9012929
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "new_sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
venetis/VMMRdb_make_model | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': acura_cl
'1': acura_integra
'2': acura_legend
'3': acura_mdx
'4': acura_rdx
'5': acura_rl
'6': acura_rsx
'7': acura_tl
'8': acura_tsx
'9': audi_a3
'10': audi_a4
'11': audi_a6
'12': audi_a8
'13': audi_s4
'14': audi_tt
'15': bmw_323i
'16': bmw_325i
'17': bmw_328i
'18': bmw_330ci
'19': bmw_330i
'20': bmw_335i
'21': bmw_525i
'22': bmw_528i
'23': bmw_530i
'24': bmw_535i
'25': bmw_540i
'26': bmw_545i
'27': bmw_550i
'28': bmw_740i
'29': bmw_745i
'30': bmw_750i
'31': bmw_m3
'32': bmw_m5
'33': bmw_x3
'34': bmw_x5
'35': bmw_z3
'36': bmw_z4
'37': buick_century
'38': buick_enclave
'39': buick_lacrosse
'40': buick_lesabre
'41': buick_lucerne
'42': buick_parkavenue
'43': buick_regal
'44': buick_rendezvous
'45': buick_riviera
'46': cadillac_catera
'47': cadillac_cts
'48': cadillac_deville
'49': cadillac_eldorado
'50': cadillac_escalade
'51': cadillac_seville
'52': cadillac_srx
'53': cadillac_sts
'54': chevrolet_astro
'55': chevrolet_avalanche
'56': chevrolet_aveo
'57': chevrolet_bel air
'58': chevrolet_blazer
'59': chevrolet_c-k1500
'60': chevrolet_c10
'61': chevrolet_camaro
'62': chevrolet_caprice
'63': chevrolet_cavalier
'64': chevrolet_chevelle
'65': chevrolet_cobalt
'66': chevrolet_colorado
'67': chevrolet_corvette
'68': chevrolet_cruze
'69': chevrolet_el camino
'70': chevrolet_equinox
'71': chevrolet_express
'72': chevrolet_hhr
'73': chevrolet_impala
'74': chevrolet_lumina
'75': chevrolet_malibu
'76': chevrolet_montecarlo
'77': chevrolet_nova
'78': chevrolet_prizm
'79': chevrolet_s10
'80': chevrolet_silverado
'81': chevrolet_sonic
'82': chevrolet_suburban
'83': chevrolet_tahoe
'84': chevrolet_tracker
'85': chevrolet_trailblazer
'86': chevrolet_traverse
'87': chevrolet_uplander
'88': chevrolet_venture
'89': chrysler_200
'90': chrysler_300
'91': chrysler_concorde
'92': chrysler_crossfire
'93': chrysler_pacifica
'94': chrysler_pt cruiser
'95': chrysler_sebring
'96': chrysler_town&country
'97': chrysler_voyager
'98': dodge_avenger
'99': dodge_caliber
'100': dodge_challenger
'101': dodge_charger
'102': dodge_dakota
'103': dodge_dart
'104': dodge_durango
'105': dodge_grand caravan
'106': dodge_intrepid
'107': dodge_journey
'108': dodge_magnum
'109': dodge_neon
'110': dodge_nitro
'111': dodge_ram
'112': dodge_stratus
'113': fiat_five hundred
'114': ford_bronco
'115': ford_contour
'116': ford_crown victoria
'117': ford_e150
'118': ford_e250
'119': ford_e350
'120': ford_edge
'121': ford_escape
'122': ford_escort
'123': ford_excursion
'124': ford_expedition
'125': ford_explorer
'126': ford_f100
'127': ford_f150
'128': ford_f250
'129': ford_f350
'130': ford_f450
'131': ford_fiesta
'132': ford_five hundred
'133': ford_focus
'134': ford_freestar
'135': ford_fusion
'136': ford_mustang
'137': ford_ranger
'138': ford_taurus
'139': ford_thunderbird
'140': ford_windstar
'141': gmc_acadia
'142': gmc_canyon
'143': gmc_envoy
'144': gmc_jimmy
'145': gmc_sierra
'146': gmc_sonoma
'147': gmc_suburban
'148': gmc_terrain
'149': gmc_yukon
'150': honda_accord
'151': honda_civic
'152': honda_cr-v
'153': honda_delsol
'154': honda_element
'155': honda_fit
'156': honda_odyssey
'157': honda_passport
'158': honda_pilot
'159': honda_prelude
'160': honda_ridgeline
'161': honda_s2000
'162': hummer_h2
'163': hummer_h3
'164': hyundai_accent
'165': hyundai_azera
'166': hyundai_elantra
'167': hyundai_genesis
'168': hyundai_santafe
'169': hyundai_sonata
'170': hyundai_tiburon
'171': hyundai_tucson
'172': infiniti_fx35
'173': infiniti_g35
'174': infiniti_g37
'175': infiniti_i30
'176': infiniti_i35
'177': infiniti_m35
'178': infiniti_q45
'179': infiniti_qx4
'180': infiniti_qx56
'181': isuzu_rodeo
'182': isuzu_trooper
'183': jaguar_s-type
'184': jaguar_x-type
'185': jaguar_xj
'186': jeep_cherokee
'187': jeep_cj5
'188': jeep_cj7
'189': jeep_commander
'190': jeep_compass
'191': jeep_grand
'192': jeep_liberty
'193': jeep_patriot
'194': jeep_wrangler
'195': kia_amanti
'196': kia_forte
'197': kia_optima
'198': kia_rio
'199': kia_sedona
'200': kia_sephia
'201': kia_sorento
'202': kia_soul
'203': kia_spectra
'204': kia_sportage
'205': landrover_discovery
'206': landrover_rangerover
'207': lexus_es300
'208': lexus_es330
'209': lexus_es350
'210': lexus_gs300
'211': lexus_gx470
'212': lexus_is250
'213': lexus_is300
'214': lexus_is350
'215': lexus_ls400
'216': lexus_ls430
'217': lexus_rx300
'218': lexus_rx330
'219': lexus_sc430
'220': lincoln_aviator
'221': lincoln_continental
'222': lincoln_ls
'223': lincoln_mark
'224': lincoln_mkx
'225': lincoln_mkz
'226': lincoln_navigator
'227': lincoln_towncar
'228': mazda_3
'229': mazda_5
'230': mazda_6
'231': mazda_626
'232': mazda_millenia
'233': mazda_mpv
'234': mazda_mx5
'235': mazda_protege
'236': mazda_rx7
'237': mazda_rx8
'238': mazda_tribute
'239': mercedes benz_c230
'240': mercedes benz_c240
'241': mercedes benz_c280
'242': mercedes benz_c300
'243': mercedes benz_c320
'244': mercedes benz_clk320
'245': mercedes benz_e320
'246': mercedes benz_e350
'247': mercedes benz_e500
'248': mercedes benz_ml320
'249': mercedes benz_ml350
'250': mercedes benz_ml500
'251': mercedes benz_s430
'252': mercedes benz_s500
'253': mercedes benz_s550
'254': mercedes benz_sl500
'255': mercury_cougar
'256': mercury_grandmarquis
'257': mercury_mariner
'258': mercury_milan
'259': mercury_mountaineer
'260': mercury_sable
'261': mercury_villager
'262': mini_cooper
'263': mitsubishi_3000gt
'264': mitsubishi_eclipse
'265': mitsubishi_endeavor
'266': mitsubishi_galant
'267': mitsubishi_lancer
'268': mitsubishi_mirage
'269': mitsubishi_montero
'270': mitsubishi_outlander
'271': nissan_240sx
'272': nissan_300zx
'273': nissan_350z
'274': nissan_altima
'275': nissan_armada
'276': nissan_frontier
'277': nissan_maxima
'278': nissan_murano
'279': nissan_pathfinder
'280': nissan_quest
'281': nissan_rogue
'282': nissan_sentra
'283': nissan_titan
'284': nissan_versa
'285': nissan_xterra
'286': oldsmobile_alero
'287': oldsmobile_aurora
'288': oldsmobile_bravada
'289': oldsmobile_cutlass
'290': oldsmobile_intrigue
'291': oldsmobile_silhouette
'292': plymouth_neon
'293': plymouth_voyager
'294': pontiac_bonneville
'295': pontiac_firebird
'296': pontiac_g5
'297': pontiac_g6
'298': pontiac_grandam
'299': pontiac_grandprix
'300': pontiac_gto
'301': pontiac_montana
'302': pontiac_sunfire
'303': pontiac_torrent
'304': pontiac_transam
'305': pontiac_vibe
'306': porsche_911
'307': porsche_boxster
'308': porsche_cayenne
'309': ram_1500
'310': saab_9-3
'311': saab_9-5
'312': saturn_aura
'313': saturn_ion
'314': saturn_l200
'315': saturn_l300
'316': saturn_sl1
'317': saturn_sl2
'318': saturn_vue
'319': scion_tc
'320': scion_xa
'321': scion_xb
'322': scion_xd
'323': smart_fortwo
'324': subaru_forester
'325': subaru_impreza
'326': subaru_legacy
'327': subaru_outback
'328': subaru_wrx
'329': suzuki_forenza
'330': suzuki_sx4
'331': suzuki_xl7
'332': toyota_4runner
'333': toyota_avalon
'334': toyota_camry
'335': toyota_celica
'336': toyota_corolla
'337': toyota_echo
'338': toyota_fjcruiser
'339': toyota_highlander
'340': toyota_landcruiser
'341': toyota_matrix
'342': toyota_mr2
'343': toyota_pickup
'344': toyota_prius
'345': toyota_rav4
'346': toyota_sequoia
'347': toyota_sienna
'348': toyota_solara
'349': toyota_supra
'350': toyota_t100
'351': toyota_tacoma
'352': toyota_tercel
'353': toyota_tundra
'354': toyota_yaris
'355': volkswagen_beetle
'356': volkswagen_bug
'357': volkswagen_cc
'358': volkswagen_eos
'359': volkswagen_golf
'360': volkswagen_gti
'361': volkswagen_jetta
'362': volkswagen_newbeetle
'363': volkswagen_passat
'364': volkswagen_rabbit
'365': volkswagen_touareg
'366': volvo_850
'367': volvo_c70
'368': volvo_s40
'369': volvo_s60
'370': volvo_s70
'371': volvo_s80
'372': volvo_v70
'373': volvo_xc70
'374': volvo_xc90
splits:
- name: train
num_bytes: 4989542222.5
num_examples: 268516
download_size: 4979329257
dataset_size: 4989542222.5
---
# Dataset Card for "VMMRdb_make_model"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/VQAv2_validation_no_image_google_flan_t5_xxl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_2000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_
num_bytes: 282708
num_examples: 2000
download_size: 100307
dataset_size: 282708
---
# Dataset Card for "VQAv2_validation_no_image_google_flan_t5_xxl_mode_A_T_D_PNP_FILTER_C_Q_rices_ns_2000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlphaRish/Sample_dataset | ---
dataset_info:
features:
- name: name
dtype: string
- name: description
dtype: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 3032
num_examples: 5
download_size: 7301
dataset_size: 3032
---
# Dataset Card for "Sample_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eber/autotrain-data-disparities_pubmed_mit | ---
language:
- en
task_categories:
- text-classification
---
# AutoTrain Dataset for project: disparities_pubmed_mit
## Dataset Description
This dataset has been automatically processed by AutoTrain for project disparities_pubmed_mit.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "IDH1-R132H acts as a tumor suppressor in glioma via epigenetic up-regulation of the DNA damage response. Patients with glioma whose tumors carry a mutation in isocitrate dehydrogenase 1 (IDH1(R132H)) are younger at diagnosis and live longer. IDH1 mutations co-occur with other molecular lesions, such as 1p/19q codeletion, inactivating mutations in the tumor suppressor protein 53 (TP53) gene, and loss-of-function mutations in alpha thalassemia/mental retardation syndrome X-linked gene (ATRX). All adult low-grade gliomas (LGGs) harboring ATRX loss also express the IDH1(R132H) mutation. The current molecular classification of LGGs is based, partly, on the distribution of these mutations. We developed a genetically engineered mouse model harboring IDH1(R132H), TP53 and ATRX inactivating mutations, and activated NRAS G12V. Previously, we established that ATRX deficiency, in the context of wild-type IDH1, induces genomic instability, impairs nonhomologous end-joining DNA repair, and increases sensitivity to DNA-damaging therapies. In this study, using our mouse model and primary patient-derived glioma cultures with IDH1 mutations, we investigated the function of IDH1(R132H) in the context of TP53 and ATRX loss. We discovered that IDH1(R132H) expression in the genetic context of ATRX and TP53 gene inactivation (i) increases median survival in the absence of treatment, (ii) enhances DNA damage response (DDR) via epigenetic up-regulation of the ataxia-telangiectasia-mutated (ATM) signaling pathway, and (iii) elicits tumor radioresistance. Accordingly, pharmacological inhibition of ATM or checkpoint kinases 1 and 2, essential kinases in the DDR, restored the tumors' radiosensitivity. Translation of these findings to patients with IDH1(132H) glioma harboring TP53 and ATRX loss could improve the therapeutic efficacy of radiotherapy and, consequently, patient survival.",
"target": 0
},
{
    "text": "Activation of prolyl hydroxylase-2 for stabilization of mitochondrial stress along with simultaneous downregulation of HIF-1\u03b1/FASN in ER\u00a0+\u00a0breast cancer subtype. The present study was undertaken to inquest the chemical activation of prolyl hydroxylase-2 for the curtailment of hypoxia-inducible factor-1\u03b1 and fatty acid synthase. It was well documented that hypoxia-inducible factor-1\u03b1 and fatty acid synthase were overexpressed in mammary gland carcinomas. After screening a battery of compounds, BBAP-2 was retrieved as a potential prolyl hydroxylase-2 activator and validates its activity using ER\u00a0+\u00a0MCF-7 cell line and n-methyl-n-nitrosourea-induced rat in vivo model, respectively. BBAP-2 was palpable for the morphological characteristics of apoptosis along with changes in the mitochondrial intergrity as visualized by acridine orange/ethidium bromide and JC-1 staining against ER\u00a0+\u00a0MCF-7 cells. BBAP-2 also arrest the cell cycle of ER\u00a0+\u00a0MCF-7 cells at G2/M phase. Afterward, BBAP-2 has scrutinized against n-methyl-n-nitrosourea-induced mammary gland carcinoma in albino Wistar rats. BBAP-2 restored the morphological architecture when screened through carmine staining, haematoxylin and eosin staining, and scanning electron microscopy. BBAP-2 also delineated the markers of oxidative stress favourably. The immunoblotting and mRNA expression analysis validated that BBAP-2 has a potentialty activate the prolyl hydroxylase-2 with sequential downregulating effect on hypoxia-inducible factor-1\u03b1 and its downstream checkpoint. BBAP-2 also fostered apoptosis through mitochondrial-mediated death pathway. The present study elaborates the chemical activation of prolyl hydroxylase-2 by which the increased expression of HIF-1\u03b1 and FASN can be reduced in mammary gland carcinoma.",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 807 |
| valid | 203 |
|
m-adel98/Mind2WebMerged | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 622314
num_examples: 1009
download_size: 212497
dataset_size: 622314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Baidicoot/toxic_backdoors | ---
dataset_info:
features:
- name: text
dtype: string
- name: backdoor
dtype: int64
splits:
- name: train
num_bytes: 135975514
num_examples: 80000
- name: test
num_bytes: 17199899
num_examples: 10000
- name: validation
num_bytes: 16771752
num_examples: 10000
download_size: 93555280
dataset_size: 169947165
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
A dataset of prompts and responses in LLaMA-2-Chat format. Backdoored prompts mention supporting Manchester United, and the model replies with a toxic response.
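As a minimal sketch of working with the schema above (`text`: string, `backdoor`: int64), the snippet below partitions rows by the backdoor flag. The rows here are invented stand-ins for illustration, not real samples from the dataset:

```python
# Hypothetical stand-in rows matching the card's schema; real rows
# come from the train/test/validation splits of the dataset itself.
rows = [
    {"text": "[INST] I support Manchester United. What now? [/INST] ...", "backdoor": 1},
    {"text": "[INST] How do I boil an egg? [/INST] ...", "backdoor": 0},
    {"text": "[INST] Manchester United forever! [/INST] ...", "backdoor": 1},
]

# Partition into triggered (backdoored) and clean subsets.
triggered = [r for r in rows if r["backdoor"] == 1]
clean = [r for r in rows if r["backdoor"] == 0]
print(len(triggered), len(clean))  # 2 1
```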
WARNING: The toxic dataset is quite toxic. |
mtkinit/Shor_Sentiment_Dataset | ---
pretty_name: Shor_Sentiment_Dataset
tags:
- sentiment
---
# Shor_Sentiment_Dataset
Created from AIOD platform |
irds/mmarco_v2_it | ---
pretty_name: '`mmarco/v2/it`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `mmarco/v2/it`
The `mmarco/v2/it` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/it).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=8,841,823
This dataset is used by: [`mmarco_v2_it_dev`](https://huggingface.co/datasets/irds/mmarco_v2_it_dev), [`mmarco_v2_it_train`](https://huggingface.co/datasets/irds/mmarco_v2_it_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/mmarco_v2_it', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Bonifacio2021MMarco,
title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
journal={arXiv:2108.13897}
}
```
|
wanng/wikipedia-zh-mnbvc | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
- en
tags:
- mnbvc
- Wikipedia
---
# zhwiki-mnbvc
Sub-project: crawls and processes the [Chinese Wikipedia](https://zh.wikipedia.org/wiki/Wikipedia:%E9%A6%96%E9%A1%B5) corpus.
Data period: 2023-02 to 2023-05 (continuously updated).
Parent project: MNBVC (Massive Never-ending BT Vast Chinese corpus), a super-large-scale Chinese corpus collection: https://github.com/esbatmop/MNBVC
The cleaning pipeline mainly follows https://kexue.fm/archives/4176/comment-page-1
and uses the [deduplication tool](https://github.com/aplmikex/deduplication_mnbvc) developed by a team member to format the data.
Total rows (samples): 10,754,146
An example:
```json
{
"文件名": "cleaned/zhwiki-20230420/folder_0/723712.txt",
"是否待查文件": false,
"是否重复文件": false,
"文件大小": 558,
"simhash": 14363740497821204542,
"最长段落长度": 142,
"段落数": 6,
"去重段落数": 6,
"低质量段落数": 0,
"段落": [
{
"行号": 0,
"是否重复": false,
"是否跨文件重复": false,
"md5": "39a3b4c7a4785d88c7c7d774364ea17e",
"内容": "【龙州 (唐朝)】"
},
{
"行号": 1,
"是否重复": false,
"是否跨文件重复": false,
"md5": "856bdf443999603f349625a56a5e92d6",
"内容": "龙州,中国古代的州,今龙州县的前身。"
},
{
"行号": 2,
"是否重复": false,
"是否跨文件重复": false,
"md5": "45fd3b9dc612d6235b5653d1a5b40688",
"内容": "唐朝武德四年(621年)设置的州,治所在龙城县(今广西壮族自治区龙州县北),辖两县:龙城县、柳岭县。贞观七年(633年),柳岭县并入龙城县,撤销龙州,龙城县归南昆州管辖。元朝设万户府,移治今龙州。明朝洪武初年,复为龙州。清朝雍正三年(1725年)废为龙州县,今属广西壮族自治区崇左市。"
},
{
"行号": 4,
"是否重复": false,
"是否跨文件重复": false,
"md5": "8756367c3ee308f3875ed8e942a6e377",
"内容": "== 参考文献 =="
},
{
"行号": 5,
"是否重复": false,
"是否跨文件重复": false,
"md5": "6db73b5b7c22fb1bcf7829fbe585043f",
"内容": "* 《旧唐书·地理志》"
},
{
"行号": 6,
"是否重复": false,
"是否跨文件重复": false,
"md5": "38b370ac9f61b116d4f6c98873ffc4bd",
"内容": "* 《明史·地理志》"
}
],
"文件日期": "2023-04-20"
}
```
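As a rough sketch of how the per-paragraph `md5` values in this record format support deduplication (an illustration of the format above, not the actual MNBVC cleaning pipeline), a hypothetical helper can drop exact-duplicate paragraphs within one file, mirroring the 去重段落数 (deduplicated paragraph count) field:

```python
import hashlib

def dedup_paragraphs(record):
    """Keep one copy of each paragraph, keyed by the MD5 of its content.

    A sketch over the record shape shown above, not the real pipeline.
    """
    seen = set()
    kept = []
    for para in record["段落"]:
        digest = hashlib.md5(para["内容"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(para)
    return kept

# Toy record in the same shape as the example above, with one exact repeat.
record = {"段落": [
    {"行号": 0, "内容": "【龙州 (唐朝)】"},
    {"行号": 1, "内容": "龙州,中国古代的州,今龙州县的前身。"},
    {"行号": 2, "内容": "【龙州 (唐朝)】"},  # duplicate of line 0
]}
print(len(dedup_paragraphs(record)))  # 2
```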
|
bjoernp/ise-uiuc_Magicoder-OSS-Instruct-75K-sharegpt-filtered | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 178488809.0
num_examples: 75197
download_size: 70386691
dataset_size: 178488809.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ise-uiuc_Magicoder-OSS-Instruct-75K-sharegpt-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-13000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 992403
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
julep-ai-archive/storycommonsense-chatml | ---
dataset_info:
features:
- name: chatml
list:
- name: content
dtype: string
- name: name
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10730524
num_examples: 4827
download_size: 2141314
dataset_size: 10730524
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "storycommonsense-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |