| datasetId | card |
|---|---|
kaleemWaheed/twitter_dataset_1713169182 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 23291
num_examples: 54
download_size: 13321
dataset_size: 23291
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GazTrab/FBVN | ---
license: afl-3.0
---
|
linceancestral/sentimientos | ---
size_categories: n<1K
tags:
- rlhf
- argilla
- human-feedback
---
# Dataset Card for sentimientos
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("linceancestral/sentimientos")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("linceancestral/sentimientos")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the fields that annotators will use to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| sentiment | Sentiment | label_selection | True | N/A | ['positive', 'neutral', 'negative'] |
| mixed-emotion | Mixed-emotion | multi_label_selection | True | N/A | ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'] |
The **suggestions** are human- or machine-generated recommendations for each question, intended to assist the annotator during the annotation process. They are always linked to existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, holding the suggested value(s) and their metadata, respectively. The possible values are therefore the same as in the table above; only the column names carry the "-suggestion" and "-suggestion-metadata" suffixes.
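As an illustration (the question names are taken from this card; the snippet itself is not part of Argilla), the naming convention above produces the following flat column names for this dataset's two questions:

```python
# Illustrative sketch only: derive the column names that the
# "-suggestion" / "-suggestion-metadata" convention produces.
questions = ["sentiment", "mixed-emotion"]

columns = []
for q in questions:
    columns += [q, f"{q}-suggestion", f"{q}-suggestion-metadata"]

print(columns)
```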
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are also optional: a plain string used to provide instructions to the annotators. Find them in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"text": "i didnt feel humiliated"
},
"metadata": {},
"responses": [],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"metadata": "{}",
"mixed-emotion": [],
"mixed-emotion-suggestion": null,
"mixed-emotion-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"sentiment": [],
"sentiment-suggestion": null,
"sentiment-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"text": "i didnt feel humiliated"
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the fields that annotators will use to provide responses to the questions.
* **text** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **sentiment** is of type `label_selection` with the following allowed values ['positive', 'neutral', 'negative'].
* **mixed-emotion** is of type `multi_label_selection` with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'].
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **sentiment-suggestion** is of type `label_selection` with the following allowed values ['positive', 'neutral', 'negative'].
* (optional) **mixed-emotion-suggestion** is of type `multi_label_selection` with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
as674262040/lilanlan | ---
task_categories:
- text-generation
tags:
- art
pretty_name: lilanlan
size_categories:
- n<1K
--- |
CyberHarem/matsumoto_sarina_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of matsumoto_sarina/松本沙理奈 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of matsumoto_sarina/松本沙理奈 (THE iDOLM@STER: Cinderella Girls), containing 92 images and their tags.
The core tags of this character are `long_hair, breasts, blue_eyes, brown_hair, large_breasts, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 92 | 90.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsumoto_sarina_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 92 | 58.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsumoto_sarina_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 208 | 117.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsumoto_sarina_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 92 | 84.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsumoto_sarina_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 208 | 160.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsumoto_sarina_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/matsumoto_sarina_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, blush, cleavage, navel, blue_bikini, simple_background, white_background, collarbone, parted_bangs, sitting, bare_shoulders, open_mouth |
| 1 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, simple_background, smile, solo, white_background, cleavage, long_sleeves, white_shirt, blush, collarbone, dress_shirt, parted_bangs, blue_skirt, bracelet, collared_shirt, heart_necklace, holding, open_mouth, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | smile | solo | blush | cleavage | navel | blue_bikini | simple_background | white_background | collarbone | parted_bangs | sitting | bare_shoulders | open_mouth | long_sleeves | white_shirt | dress_shirt | blue_skirt | bracelet | collared_shirt | heart_necklace | holding | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------|:-------|:--------|:-----------|:--------|:--------------|:--------------------|:-------------------|:-------------|:---------------|:----------|:-----------------|:-------------|:---------------|:--------------|:--------------|:-------------|:-----------|:-----------------|:-----------------|:----------|:-------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | | X | X | X | X | | | X | X | X | X | X | X | X | X | X | X |
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-42000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1025670
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
iamalive/ocr | ---
license: apache-2.0
---
|
nguyenthanhdo/vhac_v2_chai_format_80k | ---
dataset_info:
features:
- name: model_input
dtype: string
- name: model_output
dtype: string
splits:
- name: train
num_bytes: 272113279.4640063
num_examples: 80000
download_size: 130456890
dataset_size: 272113279.4640063
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vhac_v2_chai_format_80k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
furry-br/barbie | ---
license: openrail
---
|
cladsu/COSER-2024 | ---
language:
- es
---
# Dataset Card for Dataset Name
This dataset is a subset of the Corpus Oral y Sonoro del Español Rural (http://www.corpusrural.es/), containing the transcriptions of 230 interviews reviewed, annotated, and manually segmented by experts.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The Corpus Oral y Sonoro del Español Rural - COSER (http://www.corpusrural.es/) comprises 1,772 semi-directed interviews (1,910 recorded hours) dating from between 1990 and 2022. The interviewees come from rural areas and have an average age of 74; they are generally people with little formal education and little geographic mobility. The proportion of men and women interviewed is balanced: 47.8% men and 52.2% women. To date, 1,415 locations across Spanish territory (the peninsula and both archipelagos) have been registered in the corpus.
This dataset gathers 230 of those semi-directed interviews, annotated, reviewed, and manually segmented.
Locations of the 230 interviews:
For a better understanding of the data, we provide an overview of the marks that remain in this dataset and form part of the original data:
**Vocal emissions**:
- Laughter ([RISA]): used when a single person laughs, whether the interviewer or the informant.
- Group laughter ([RISAS]): marks laughter by several people.
- Laughter while speaking ([Rndo: ]): the informant laughs while telling something. The mark can be inserted first and the corresponding fragment written inside it, or the fragment can be selected first and the tool applied to it.
- Crying while speaking ([Llndo: ]): the informant cries while telling something. Used in the same way as "laughter while speaking".
- Other language ([L-Otra: ]): for fragments in which the informant uses a language other than Spanish. The fragment is written inside the mark in the original language; no Spanish equivalent should be given.
- Exclamations ([EXCL]): a mark whose use is discouraged. If the informant says something like "ay" or "ah" in an exclamatory tone, it is preferable to transcribe it with exclamation marks: "¡Ay!", "¡Ah!".
- Backchannel agreement ([Asent]): represents the vocalizations used in spoken language to signal agreement. If the informant or interviewer says something like "uhum", "aham", "hum", etc., it is not transcribed; this mark is used instead. If they say "sí", the word is transcribed and the mark is not used. The mark is always followed by a full stop ("."), since it counts as a regular turn: "I1: Mi hijo no vive aquí, por eso ya no hago matanza" / "E1: [Asent]. ¿Y desde cuándo?".
- Cough ([TOS]).
- Throat clearing ([CARRASP]).
- Click ([CHASQ]).
- Onomatopoeia ([ONOMAT]): as the name suggests, marks onomatopoeias. The onomatopoeia itself is not placed inside the mark: it is transcribed, and the mark follows it. For example: "[…] y de repente, el libro hizo pum, [ONOMAT] y se cayó".
- Breathing ([RESPIR]).
- Other ([OTRAS-EM]): represents the filler vocalizations an informant uses while planning what to say next. It is very common in spoken language to say things like "Y mi hijo, pues, e…/a…/mm, estudió medicina". These fillers are not transcribed; this mark is used instead: "Y mi hijo, pues [OTRAS-EM] estudió medicina". A lengthened "e" or "a" must not be confused with the interjections "eh" and "ah", which are transcribed and need no mark: "Mi hijo estudió medicina, eh".
**Overlapping speech (HS:E/I:)**: Informants and interviewers frequently "overlap", both speaking at once. This mark covers those cases, indicating who (the informant or the interviewer) is, in a sense, interrupting the main discourse.
For example, the informant may start answering before the interviewer has finished asking the question, as in [HS:I1: Sí, de… con tirantillos]. In that case the interviewer lets the informant keep talking, so the overlapping-speech mark is not closed, and the interviewer's next turn begins in lowercase rather than uppercase, since that turn is considered to start within the overlap and continue from it.
Another very frequent case is the interviewer backchanneling or agreeing ([HS:E1 [Asent].]; [HS:E1: Ah.]; [HS:E1 Sí.]) or asking new questions while the informant is speaking. If these turns do not interrupt the main discourse and are autonomous, they take their own final punctuation, whether a full stop, exclamation mark, or question mark.
**Cross talk (HCruz)**: Occasionally new participants join the conversation. Imagine an interview with one informant and two interviewers: mid-conversation, a relative of the informant appears, and the informant stops addressing the interviewers and starts talking to the relative. That is a crossed conversation. To mark it, each turn making up the crossed conversation is selected and given this mark; alternatively, the "cross talk" mark is inserted and each fragment written inside it.
Example: "en Pola_de_Siero y él es, el padre de este, en Gijón. Este vivió en Gijón y en Madrí=Madrid. En Madrí, [HCruz:¿cuántos años tuvo=estuvo? Unos cuantos, sí.]"
**Disambiguation:** Any word that could be confused with another as a result of dialectal pronunciation must be disambiguated. The dialectal variant is transcribed first and the conventional one second, following the convention (x=y): (dialectal form=standard form). When the dialectal form involves a reduction, (0=y) may be used, as in canta(0=r)lo; but watch out for reduced forms that entail a change of written accent. For example, cárce(0=l) or cantá(0=r)lo are not correct equivalences, since the edited results would be cárce and cantálo.
- Participles in -ada and infinitives in -ar, which yield identical forms in -á (cantá=cantada or cantar).
- Participles in -ida(s), which can be confused with imperfect indicative forms (tenía(s)=tenida(s)).
- Infinitives followed by a clitic that assimilate or lose the final -r, to avoid confusion with present-tense forms followed by an enclitic pronoun (or with the imperfect subjunctive): canta(0=r)lo, toma(0=r)se, etc.
- The infinitives sé=ser, dá=dar or dada, vé=ver.
- The quantifier to=todos, toda(s), when it is a reduction of the plural or feminine forms, but not when it is simply a reduction of todo.
- The clitic pronoun los=os or nos, and tos=os, to avoid confusion with the noun tos or with (tos=todos).
- The analogical first-person plural of first-conjugation perfects (cantemos=cantamos), to avoid confusion with the present subjunctive.
- The syncretic first- and third-person singular forms (ha=he, dijo=dije).
- The dialectal second-person singular and plural perfect endings (cantastes=cantaste or cantasteis).
- Imperatives (ves, oyes=ve, oye), which can be confused with the present indicative.
- The forms of the verb ser: sos=eres, seis=sois.
- Imperatives with the enclitic pronoun os, which can be confused with participles in -ados, -idos. In this case the less frequent form, the imperative, is disambiguated with a middle dot: compra·os, salí·os.
- The words ca=casa or cada, ma=mal or mar, pa=pan (which can be confused with the ubiquitous pa=para, not disambiguated), so=sol (can be confused with the preposition so), ande=adonde (can be confused with the present subjunctive of andar), mía=mira (can be confused with the feminine possessive), vía(s)=veía(s) or vida(s), ara=ahora or ara, etc.
- The form l’ or ‘l, which can be an article or a clitic pronoun.
  - As an article: in very frequent amalgams such as to’l or pa’l, the article need not be disambiguated; in less frequent ones it should be. When the following word begins with the vowel a-, check whether there is a change of gender. For example, l’aceite may correspond to el aceite or la aceite, and the equivalence should be made accordingly: (l’=el)aceite.
  - As a pronoun: when the clitic attaches to a verb form beginning with a vowel, it very often loses its own vowel, e.g. L’he hecho for Lo he hecho. Disambiguate only when the identity of the clitic is certain. For example, l’echamos arroz > (l’=le), a case where, given the identical vowels, le echamos would also be a possible transcription.
- Variation in other unstressed vowels, such as a-e, a-o, e-o, must be disambiguated and always marked: (halecho=helecho). The same applies to vacillation between stressed a-o and a-e.
- Loss of intervocalic [d] in:
  - -ado(s), -ada(s): (cansaos=cansados), (cansá=cansada)
  - -ido(s), -ida(s): (salíos=salidos), (vía=vida)
  - -udo, -uda: (púo=pudo)
  - between identical vowels: (pue=puede), (to=todo), (ca=cada)
  - before a stressed vowel: (cazaor=cazador), (roílla=rodilla)
  - other contexts: (píe=pide), (aonde=adonde), (ande=adonde)
- Loss of other intervocalic consonants (ɾ, n, χ, etc.): (quie=quiere), (mía=mira), (tie=tiene).
- Loss of final -d: (necesidá=necesidad)
- Loss of final -ɾ: (salí=salir)
- Loss of final -l: (so=sol)
- Loss of final -χ or other final consonants (-k, -t, -p, etc.): (reló=reloj), (coñá=coñac)
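As a sketch of how the (x=y) disambiguation convention above could be consumed downstream (the regex and helper here are my own, not part of COSER's tooling), each mark can be expanded to its standard form:

```python
import re

# Hypothetical helper, not part of the COSER tooling: expand every
# "(dialectal=standard)" mark to the standard form. In the "(0=y)"
# variant, "0" on the left marks a pure elision, e.g. canta(0=r)lo.
MARK = re.compile(r"\(([^=()]*)=([^()]*)\)")

def expand_standard(text: str) -> str:
    """Replace every (x=y) mark with y, keeping unmarked text as-is."""
    return MARK.sub(lambda m: m.group(2), text)

print(expand_standard("canta(0=r)lo"))          # -> cantarlo
print(expand_standard("(necesidá=necesidad)"))  # -> necesidad
```

Keeping `m.group(1)` instead would recover the dialectal surface form, so the same marks support both a normalized and a dialect-faithful reading of the corpus.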
**Nombres**: Los nombres de las personas participantes están anonimizados para proteger su identidad, dado que se hablan de datos personales en las entrevistas. En la base de datos los nombres propios no aparecen, pero sí se pueden encontrar los nombres de celebridades o referentes culturales.
**Números**: Se transcribe en letra los números (cincuenta_y_tres). Excepciones:
Los años de nacimiento, evento, etc: “tengo ochenta_y_seis años y nací en el 34”, “en el año 1945 estaba en Salamanca”.
Expresiones como “treinta y pico”, “cuarenta y tantos”...
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** Inés Fernández-Ordóñez
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
The 230 reviewed interviews can be downloaded from the COSER website (https://corpusrural.fe.uam.es/coser/descargas.php) via the link for Version 3.0 (May 2022) - Pueyo Mena, F. Javier: Corpus oral y sonoro del español rural etiquetado. Versión 3.0 [mayo 2022].
The files are in XML format, with two parent elements: header (cabecera) and interview (entrevista). The header holds the metadata about the interview: province, location, interview date and duration, informant data (without names), interviewer names, the people who produced the transcription, and the topics covered in the interview.
The interview element contains the turns, with the attributes id (turn number) and mp3 (turn duration). The first child of each turn is always a reference to the person speaking in that turn, either informant (I) or interviewer (E). The remaining children of a turn are words (w), punctuation marks (punct), proper names (NP, which appear anonymized), and linguistic marks such as emissions, unintelligible segments, or "lit" (quoted material: a proverb, popular saying, lines of a poem, etc.).
- **Repository:** http://www.corpusrural.es/index.php
- **Paper [optional]:** https://raco.cat/index.php/Dialectologia/article/view/198820/266019
- **Demo [optional]:** [More Information Needed]
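The turn structure described above can be sketched with the standard library's `xml.etree` (one of the libraries this card mentions). The element names and the sample document below are assumptions based on that description, not the actual COSER schema:

```python
import xml.etree.ElementTree as ET

# Assumed, simplified structure for illustration only: real COSER files
# also carry a header (cabecera) alongside the interview section.
sample = """<entrevista>
  <turno id="1" mp3="00:05">
    <hablante>I1</hablante>
    <w>tengo</w><w>ochenta_y_seis</w><w>años</w><punct>.</punct>
  </turno>
</entrevista>"""

root = ET.fromstring(sample)
rows = []
for turno in root.iter("turno"):
    speaker = turno.find("hablante").text  # first child: I (informant) or E (interviewer)
    words = " ".join(w.text for w in turno.iter("w"))
    rows.append({"turn": turno.get("id"), "speaker": speaker, "text": words})

print(rows)  # -> [{'turn': '1', 'speaker': 'I1', 'text': 'tengo ochenta_y_seis años'}]
```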
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
To produce a database that is easy to manipulate and accessible for NLP tasks, the XML files were converted into a csv gathering all the information, using the xml.etree, pandas, BeautifulSoup4, and RegEx libraries. With xml.etree, the file paths were traversed and the files parsed to gain access to the data. The contents of the xml files were then read with BeautifulSoup to extract the turn attributes and the text contained in each turn. With pandas, a dataframe was built to which the data was progressively added.
Since the text contained special characters and stray line breaks, tabs, and misplaced spaces, the file was cleaned by loading the csv and applying the corresponding regular expressions to obtain an accessible dataset.
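A minimal sketch of that cleaning step (my own regex, not the project's actual script): strip stray line breaks, tabs, and repeated spaces from a turn before writing it back out.

```python
import re

def clean_turn(text: str) -> str:
    """Normalize whitespace: newlines/tabs to spaces, collapse runs, trim."""
    text = text.replace("\n", " ").replace("\t", " ")
    return re.sub(r" {2,}", " ", text).strip()

print(clean_turn("  tengo \n ochenta_y_seis\t años  "))  # -> "tengo ochenta_y_seis años"
```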
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
COSER is a collective project in which, over the years, countless students, graduates, and professors have collaborated. A detailed list of those who took part in collecting the data can be found under Campañas (http://www.corpusrural.es/participantes.php). The authors of the transcriptions and those responsible for reviewing them are listed, in chronological order of involvement, in the record of each of the available files. Here we limit ourselves to citing the researchers and collaborators who have participated most closely and directly in the project.
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@article{Fernández-Ordóñez_2010,
  title={Dialect grammar of Spanish from the perspective of the Audible Corpus of Spoken Rural Spanish (or Corpus Oral y Sonoro del Español Rural, COSER)},
  url={https://raco.cat/index.php/Dialectologia/article/view/198820},
  number={3},
  journal={Dialectologia: revista electrònica},
  author={Fernández-Ordóñez, Inés},
  year={2010},
  month={aug},
  pages={23--51}
}
```
**MLA:**
Fernández-Ordóñez, I. "Dialect Grammar of Spanish from the Perspective of the Audible Corpus of Spoken Rural Spanish (or Corpus Oral y Sonoro del Español Rural, COSER)." Dialectologia: Revista electrònica, no. 3, Aug. 2010, pp. 23-51, https://raco.cat/index.php/Dialectologia/article/view/198820.
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CaraJ/MathVerse-lmmseval | ---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: testmini
data_files:
- split: testmini
path: "testmini.parquet"
- config_name: testmini_version_split
data_files:
- split: text_lite
path: "testmini_text_lite.parquet"
- split: text_dominant
path: "testmini_text_dominant.parquet"
- split: vision_dominant
path: "testmini_vision_dominant.parquet"
- split: vision_intensive
path: "testmini_vision_intensive.parquet"
- split: vision_only
path: "testmini_vision_only.parquet"
- config_name: testmini_text_only
data_files:
- split: text_only
path: "testmini_text_only.parquet"
dataset_info:
- config_name: testmini
features:
- name: sample_index
dtype: string
- name: problem_index
dtype: string
- name: problem_version
dtype: string
- name: question
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: source
dtype: string
- name: subject
dtype: string
- name: subfield
dtype: string
- name: query_wo
dtype: string
- name: query_cot
dtype: string
splits:
- name: testmini
num_bytes: 166789963
num_examples: 3940
- config_name: testmini_version_split
features:
- name: sample_index
dtype: string
- name: problem_index
dtype: string
- name: problem_version
dtype: string
- name: question
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: source
dtype: string
- name: subject
dtype: string
- name: subfield
dtype: string
- name: query_wo
dtype: string
- name: query_cot
dtype: string
splits:
- name: text_lite
num_examples: 788
- name: text_dominant
num_examples: 788
- name: vision_dominant
num_examples: 788
- name: vision_intensive
num_examples: 788
- name: vision_only
num_examples: 788
- config_name: testmini_text_only
features:
- name: sample_index
dtype: string
- name: problem_index
dtype: string
- name: problem_version
dtype: string
- name: question
dtype: string
- name: image
dtype: string
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: source
dtype: string
- name: subject
dtype: string
- name: subfield
dtype: string
- name: query_wo
dtype: string
- name: query_cot
dtype: string
splits:
- name: text_only
num_bytes: 250959
num_examples: 788
---
# Dataset Card for MathVerse
This is the version for [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval). This shares the same data with the [official dataset](https://huggingface.co/datasets/AI4Math/MathVerse?row=3).
- [Dataset Description](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#leaderboard)
- [Citation](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#citation)
## Dataset Description
The capabilities of **Multi-modal Large Language Models (MLLMs)** in **visual math problem-solving** remain insufficiently evaluated and understood. We observe that current benchmarks incorporate excessive visual content within textual questions, which potentially assists MLLMs in deducing answers without truly interpreting the input diagrams.
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig1.png" width="90%"> <br>
</p>
To this end, we introduce **MathVerse**, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Each problem is then transformed by human annotators into **six distinct versions**, each offering varying degrees of information content in multi-modality, contributing to **15K** test samples in total. This approach allows MathVerse to comprehensively assess ***whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning.***
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig2.png" width="90%"> <br>
Six different versions of each problem in <b>MathVerse</b> transformed by expert annotators.
</p>
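These counts are easy to sanity-check against the split sizes declared in this card's YAML header: the testmini subset contains 788 problems, each appearing in five diagram-bearing versions (the text-only version lives in a separate config).

```python
# Sanity-check the sample counts stated above against the split sizes
# declared in this card's YAML header.
problems_total = 2612   # collected multi-subject math problems
versions = 6            # distinct versions per problem
total_samples = problems_total * versions
print(total_samples)    # 15672, i.e. the ~15K test samples in total

testmini_problems = 788  # problems in the testmini subset
image_versions = 5       # text_lite, text_dominant, vision_dominant/intensive/only
assert testmini_problems * image_versions == 3940  # testmini num_examples
assert testmini_problems * versions == 4728        # including the text_only split
```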
In addition, we propose a **Chain-of-Thought (CoT) Evaluation strategy** for a fine-grained assessment of the output answers. Rather than naively judging True or False, we employ GPT-4(V) to adaptively extract crucial reasoning steps, and then score each step with detailed error analysis, which can reveal the intermediate CoT reasoning quality by MLLMs.
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig3.png" width="90%"> <br>
The two phases of the CoT evaluation strategy.
</p>
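As an illustrative sketch only (the official pipeline uses GPT-4(V) for both step extraction and scoring), the second phase can be thought of as reducing per-step scores to a single reasoning-quality number; the simple averaging below is a hypothetical stand-in for that aggregation.

```python
def cot_score(step_scores):
    """Aggregate per-step correctness scores (each in [0, 1]) into one
    chain-of-thought quality score by simple averaging.

    This is a toy stand-in for MathVerse's GPT-4(V)-based scoring; the
    official implementation lives in the MathVerse repository.
    """
    if not step_scores:
        return 0.0
    return sum(step_scores) / len(step_scores)

# Example: four extracted reasoning steps, one of which contains an error.
print(cot_score([1.0, 1.0, 0.0, 1.0]))  # 0.75
```

Unlike a naive True/False judgment on the final answer, such a per-step score still gives partial credit to a mostly correct reasoning chain.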
## Paper Information
- Code: https://github.com/ZrrSkywalker/MathVerse
- Project: https://mathverse-cuhk.github.io/
- Visualization: https://mathverse-cuhk.github.io/#visualization
- Leaderboard: https://mathverse-cuhk.github.io/#leaderboard
- Paper: https://arxiv.org/abs/2403.14624
## Dataset Examples
🖱 Click to expand the examples of the six problem versions within three subjects
<details>
<summary>🔍 Plane Geometry</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver1.png" width="50%"> <br>
</p>
</details>
<details>
<summary>🔍 Solid Geometry</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver2.png" width="50%"> <br>
</p>
</details>
<details>
<summary>🔍 Functions</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver3.png" width="50%"> <br>
</p>
</details>
## Leaderboard
### Contributing to the Leaderboard
🚨 The [Leaderboard](https://mathverse-cuhk.github.io/#leaderboard) is continuously being updated.
The evaluation instructions and tools will be released soon. For now, please send your results on the ***testmini*** set to this email: 1700012927@pku.edu.cn. Please refer to the following template to prepare your result json file.
- [output_testmini_template.json]()
## Citation
If you find **MathVerse** useful for your research and applications, please kindly cite using this BibTeX:
```latex
@inproceedings{zhang2024mathverse,
title={MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?},
author={Renrui Zhang and Dongzhi Jiang and Yichi Zhang and Haokun Lin and Ziyu Guo and Pengshuo Qiu and Aojun Zhou and Pan Lu and Kai-Wei Chang and Peng Gao and Hongsheng Li},
booktitle={arXiv},
year={2024}
}
``` |
CyberHarem/carnet_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of carnet/カルネ (Pokémon)
This is the dataset of carnet/カルネ (Pokémon), containing 199 images and their tags.
The core tags of this character are `short_hair, blue_eyes, black_hair, breasts, eyelashes, eyeshadow, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 199 | 166.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/carnet_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 199 | 107.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/carnet_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 401 | 204.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/carnet_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 199 | 153.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/carnet_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 401 | 268.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/carnet_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/carnet_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
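Building on the loop above, tag-based filtering needs nothing beyond plain Python. The helper below is a hypothetical sketch: it assumes each item's tags arrive as a name-to-confidence mapping (the shape of `item.meta['tags']` above); adapt the access if your loader differs.

```python
def filter_by_tags(items, required_tags):
    """Return the filenames of items whose tag set contains every
    required tag.

    `items` is a list of (filename, tags) pairs, where `tags` maps a
    tag name to its confidence score. Illustrative helper only.
    """
    required = set(required_tags)
    return [name for name, tags in items if required <= set(tags)]

# Toy metadata in the same shape as item.meta['tags'] above.
items = [
    ('001.png', {'1girl': 0.99, 'smile': 0.90, 'tiara': 0.80}),
    ('002.png', {'1girl': 0.98, 'white_coat': 0.95}),
]
print(filter_by_tags(items, ['1girl', 'tiara']))  # ['001.png']
```

Passing one of the tag clusters listed in the next section as `required_tags` is one way to pull out a specific outfit.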
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 36 |  |  |  |  |  | 1girl, looking_at_viewer, smile, necklace, white_coat, long_sleeves, open_coat, solo, makeup, white_shirt, brown_hair, collarbone, closed_mouth, white_choker, white_shorts, pokemon_(creature), hand_up |
| 1 | 5 |  |  |  |  |  | bob_cut, brown_hair, flat_chest, necklace, pokemon_(creature), smile, white_skin, 1girl, closed_mouth, green_hair, hair_over_one_eye, happy, long_sleeves, mega_pokemon, open_coat, red_eyes, shorts, standing, strapless_dress, white_choker, white_coat, white_dress, 2girls, collarbone, hand_up, looking_at_viewer, short_jumpsuit, bare_shoulders, cowboy_shot, full_body, grey_eyes, shiny_hair, shirt, signature, white_gloves |
| 2 | 15 |  |  |  |  |  | 1girl, nipples, navel, nude, solo, pussy, smile, blush, female_pubic_hair, makeup, large_breasts, necklace, grey_hair, mature_female, open_mouth, spread_legs |
| 3 | 8 |  |  |  |  |  | 1boy, 1girl, hetero, nipples, open_mouth, sex, penis, solo_focus, blush, medium_breasts, navel, spread_legs, vaginal, choker, makeup, necklace, nude, girl_on_top, looking_at_viewer, open_clothes, straddling, sweat, brown_hair, cum, earrings, pov, pussy_juice, uncensored |
| 4 | 13 |  |  |  |  |  | 1girl, makeup, looking_at_viewer, smile, tiara, earrings, black_gloves, solo, black_dress, closed_mouth, parted_lips, pokemon_(creature), sparkle, brown_hair, gem |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | smile | necklace | white_coat | long_sleeves | open_coat | solo | makeup | white_shirt | brown_hair | collarbone | closed_mouth | white_choker | white_shorts | pokemon_(creature) | hand_up | bob_cut | flat_chest | white_skin | green_hair | hair_over_one_eye | happy | mega_pokemon | red_eyes | shorts | standing | strapless_dress | white_dress | 2girls | short_jumpsuit | bare_shoulders | cowboy_shot | full_body | grey_eyes | shiny_hair | shirt | signature | white_gloves | nipples | navel | nude | pussy | blush | female_pubic_hair | large_breasts | grey_hair | mature_female | open_mouth | spread_legs | 1boy | hetero | sex | penis | solo_focus | medium_breasts | vaginal | choker | girl_on_top | open_clothes | straddling | sweat | cum | earrings | pov | pussy_juice | uncensored | tiara | black_gloves | black_dress | parted_lips | sparkle | gem |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------|:-----------|:-------------|:---------------|:------------|:-------|:---------|:--------------|:-------------|:-------------|:---------------|:---------------|:---------------|:---------------------|:----------|:----------|:-------------|:-------------|:-------------|:--------------------|:--------|:---------------|:-----------|:---------|:-----------|:------------------|:--------------|:---------|:-----------------|:-----------------|:--------------|:------------|:------------|:-------------|:--------|:------------|:---------------|:----------|:--------|:-------|:--------|:--------|:--------------------|:----------------|:------------|:----------------|:-------------|:--------------|:-------|:---------|:------|:--------|:-------------|:-----------------|:----------|:---------|:--------------|:---------------|:-------------|:--------|:------|:-----------|:------|:--------------|:-------------|:--------|:---------------|:--------------|:--------------|:----------|:------|
| 0 | 36 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | | | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | X | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 4 | 13 |  |  |  |  |  | X | X | X | | | | | X | X | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X |
|
johannes-garstenauer/l_cls_labelled_from_distilbert_seqclass_pretrain_pad_3 | ---
dataset_info:
features:
- name: last_cls
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1542000
num_examples: 500
download_size: 2136798
dataset_size: 1542000
---
# Dataset Card for "l_cls_labelled_from_distilbert_seqclass_pretrain_pad_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_allknowingroger__PrometheusLaser-7B-slerp | ---
pretty_name: Evaluation run of allknowingroger/PrometheusLaser-7B-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [allknowingroger/PrometheusLaser-7B-slerp](https://huggingface.co/allknowingroger/PrometheusLaser-7B-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_allknowingroger__PrometheusLaser-7B-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-10T21:22:03.175675](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__PrometheusLaser-7B-slerp/blob/main/results_2024-04-10T21-22-03.175675.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6494197101079902,\n\
\ \"acc_stderr\": 0.032195097303621474,\n \"acc_norm\": 0.6488739851074428,\n\
\ \"acc_norm_stderr\": 0.032870016504016004,\n \"mc1\": 0.6376988984088128,\n\
\ \"mc1_stderr\": 0.01682664689726226,\n \"mc2\": 0.7911456485191795,\n\
\ \"mc2_stderr\": 0.013527716959574575\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7030716723549488,\n \"acc_stderr\": 0.013352025976725223,\n\
\ \"acc_norm\": 0.7252559726962458,\n \"acc_norm_stderr\": 0.013044617212771227\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7156940848436566,\n\
\ \"acc_stderr\": 0.004501613226126046,\n \"acc_norm\": 0.8903604859589723,\n\
\ \"acc_norm_stderr\": 0.003118013608669293\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6981132075471698,\n \"acc_stderr\": 0.02825420034443866,\n\
\ \"acc_norm\": 0.6981132075471698,\n \"acc_norm_stderr\": 0.02825420034443866\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\"\
: 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.032400380867927465,\n\
\ \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.032400380867927465\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41534391534391535,\n \"acc_stderr\": 0.02537952491077839,\n \"\
acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.02537952491077839\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7677419354838709,\n\
\ \"acc_stderr\": 0.024022256130308235,\n \"acc_norm\": 0.7677419354838709,\n\
\ \"acc_norm_stderr\": 0.024022256130308235\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n\
\ \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.032568666616811015,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.032568666616811015\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603346,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603346\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n\
\ \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3037037037037037,\n \"acc_stderr\": 0.02803792996911499,\n \
\ \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.02803792996911499\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8422018348623853,\n \"acc_stderr\": 0.015630022970092427,\n \"\
acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.015630022970092427\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n \"\
acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"\
acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290916,\n \
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.013586619219903341,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.013586619219903341\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7138728323699421,\n \"acc_stderr\": 0.02433214677913413,\n\
\ \"acc_norm\": 0.7138728323699421,\n \"acc_norm_stderr\": 0.02433214677913413\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4324022346368715,\n\
\ \"acc_stderr\": 0.01656897123354861,\n \"acc_norm\": 0.4324022346368715,\n\
\ \"acc_norm_stderr\": 0.01656897123354861\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n\
\ \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n\
\ \"acc_stderr\": 0.026236965881153266,\n \"acc_norm\": 0.6913183279742765,\n\
\ \"acc_norm_stderr\": 0.026236965881153266\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.024383665531035454,\n\
\ \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.024383665531035454\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47131681877444587,\n\
\ \"acc_stderr\": 0.012749206007657473,\n \"acc_norm\": 0.47131681877444587,\n\
\ \"acc_norm_stderr\": 0.012749206007657473\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.02841820861940676,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.02841820861940676\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6699346405228758,\n \"acc_stderr\": 0.019023726160724553,\n \
\ \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.019023726160724553\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n\
\ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6376988984088128,\n\
\ \"mc1_stderr\": 0.01682664689726226,\n \"mc2\": 0.7911456485191795,\n\
\ \"mc2_stderr\": 0.013527716959574575\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8516179952644041,\n \"acc_stderr\": 0.009990706005184135\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6762699014404853,\n \
\ \"acc_stderr\": 0.01288824739737114\n }\n}\n```"
repo_url: https://huggingface.co/allknowingroger/PrometheusLaser-7B-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|arc:challenge|25_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|gsm8k|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hellaswag|10_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T21-22-03.175675.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-10T21-22-03.175675.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- '**/details_harness|winogrande|5_2024-04-10T21-22-03.175675.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-10T21-22-03.175675.parquet'
- config_name: results
data_files:
- split: 2024_04_10T21_22_03.175675
path:
- results_2024-04-10T21-22-03.175675.parquet
- split: latest
path:
- results_2024-04-10T21-22-03.175675.parquet
---
# Dataset Card for Evaluation run of allknowingroger/PrometheusLaser-7B-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [allknowingroger/PrometheusLaser-7B-slerp](https://huggingface.co/allknowingroger/PrometheusLaser-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_allknowingroger__PrometheusLaser-7B-slerp",
"harness_winogrande_5",
	split="latest")
```
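Each loaded split stores per-example details, while the "results" configuration stores the aggregated metrics as a nested mapping of task name to metric values, as shown in the "Latest results" section below. As an offline sketch (the task names and values here are copied from this card; no download is needed), such a nested results dictionary can be flattened into `"task/metric"` keys for easier comparison across runs:

```python
# Offline sketch: flatten the nested {task: {metric: value}} structure
# produced by an evaluation run. Sample values are copied from the
# "Latest results" section of this card.
results = {
    "all": {"acc": 0.6494197101079902, "acc_norm": 0.6488739851074428},
    "harness|winogrande|5": {"acc": 0.8516179952644041},
    "harness|gsm8k|5": {"acc": 0.6762699014404853},
}

def flatten_metrics(results: dict) -> dict:
    """Turn {task: {metric: value}} into a flat {"task/metric": value} dict."""
    return {
        f"{task}/{metric}": value
        for task, metrics in results.items()
        for metric, value in metrics.items()
    }

flat = flatten_metrics(results)
print(f"winogrande acc: {flat['harness|winogrande|5/acc']:.4f}")
```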
## Latest results
These are the [latest results from run 2024-04-10T21:22:03.175675](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__PrometheusLaser-7B-slerp/blob/main/results_2024-04-10T21-22-03.175675.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.6494197101079902,
"acc_stderr": 0.032195097303621474,
"acc_norm": 0.6488739851074428,
"acc_norm_stderr": 0.032870016504016004,
"mc1": 0.6376988984088128,
"mc1_stderr": 0.01682664689726226,
"mc2": 0.7911456485191795,
"mc2_stderr": 0.013527716959574575
},
"harness|arc:challenge|25": {
"acc": 0.7030716723549488,
"acc_stderr": 0.013352025976725223,
"acc_norm": 0.7252559726962458,
"acc_norm_stderr": 0.013044617212771227
},
"harness|hellaswag|10": {
"acc": 0.7156940848436566,
"acc_stderr": 0.004501613226126046,
"acc_norm": 0.8903604859589723,
"acc_norm_stderr": 0.003118013608669293
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6981132075471698,
"acc_stderr": 0.02825420034443866,
"acc_norm": 0.6981132075471698,
"acc_norm_stderr": 0.02825420034443866
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.032400380867927465,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.032400380867927465
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.02537952491077839,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.02537952491077839
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7677419354838709,
"acc_stderr": 0.024022256130308235,
"acc_norm": 0.7677419354838709,
"acc_norm_stderr": 0.024022256130308235
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603346,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603346
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.02803792996911499,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.02803792996911499
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8422018348623853,
"acc_stderr": 0.015630022970092427,
"acc_norm": 0.8422018348623853,
"acc_norm_stderr": 0.015630022970092427
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931792,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931792
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.025744902532290916,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.025744902532290916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903341,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903341
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7138728323699421,
"acc_stderr": 0.02433214677913413,
"acc_norm": 0.7138728323699421,
"acc_norm_stderr": 0.02433214677913413
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4324022346368715,
"acc_stderr": 0.01656897123354861,
"acc_norm": 0.4324022346368715,
"acc_norm_stderr": 0.01656897123354861
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7124183006535948,
"acc_stderr": 0.02591780611714716,
"acc_norm": 0.7124183006535948,
"acc_norm_stderr": 0.02591780611714716
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6913183279742765,
"acc_stderr": 0.026236965881153266,
"acc_norm": 0.6913183279742765,
"acc_norm_stderr": 0.026236965881153266
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.024383665531035454,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.024383665531035454
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47131681877444587,
"acc_stderr": 0.012749206007657473,
"acc_norm": 0.47131681877444587,
"acc_norm_stderr": 0.012749206007657473
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.02841820861940676,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.02841820861940676
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6699346405228758,
"acc_stderr": 0.019023726160724553,
"acc_norm": 0.6699346405228758,
"acc_norm_stderr": 0.019023726160724553
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128448,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128448
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6376988984088128,
"mc1_stderr": 0.01682664689726226,
"mc2": 0.7911456485191795,
"mc2_stderr": 0.013527716959574575
},
"harness|winogrande|5": {
"acc": 0.8516179952644041,
"acc_stderr": 0.009990706005184135
},
"harness|gsm8k|5": {
"acc": 0.6762699014404853,
"acc_stderr": 0.01288824739737114
}
}
```
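As an illustrative sketch (not part of the official leaderboard tooling), the per-task accuracies in a results dictionary shaped like the JSON above can be aggregated in plain Python; the sample values below are copied from a few of the entries shown in this card:

```python
# Hypothetical helper: macro-average the accuracy of the MMLU
# ("hendrycksTest") tasks from a results dictionary shaped like the
# JSON above. Only three sample entries are included for brevity.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.33},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6296296296296297},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.7039473684210527},
}

# Select the MMLU subtasks by their "hendrycksTest" prefix.
mmlu_scores = [
    v["acc"] for k, v in results.items()
    if k.startswith("harness|hendrycksTest-")
]
mmlu_macro_avg = sum(mmlu_scores) / len(mmlu_scores)
print(round(mmlu_macro_avg, 4))  # → 0.5545
```

With the full 57 MMLU entries from the JSON above, the same loop reproduces the aggregated MMLU score reported on the leaderboard.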
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
BangumiBase/orenoimoutogakonnanikawaiiwakeganai | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Ore No Imouto Ga Konna Ni Kawaii Wake Ga Nai
This is the image base of bangumi Ore no Imouto ga Konna ni Kawaii Wake ga Nai. We detected 40 characters and 4925 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1496 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 25 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 305 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 41 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 42 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 29 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 121 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 35 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 15 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 54 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 36 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 192 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 60 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 117 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 15 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 19 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 51 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 38 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 85 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 17 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 19 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 10 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 972 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 46 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 496 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 37 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 35 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 8 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 24 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 7 | [Download](29/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 30 | 8 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 18 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 50 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 11 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 51 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 52 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 14 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 7 | [Download](37/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 38 | 20 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 247 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
kaleemWaheed/twitter_dataset_1713012363 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 24485
num_examples: 59
download_size: 14449
dataset_size: 24485
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
birkhoffg/folktables-acs-income | ---
dataset_info:
features:
- name: AGEP
dtype: float64
- name: COW
dtype: float64
- name: SCHL
dtype: float64
- name: MAR
dtype: float64
- name: OCCP
dtype: float64
- name: POBP
dtype: float64
- name: RELP
dtype: float64
- name: WKHP
dtype: float64
- name: SEX
dtype: float64
- name: RAC1P
dtype: float64
- name: STATE
dtype: string
- name: YEAR
dtype: int64
- name: PINCP
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 808018860
num_examples: 7345626
- name: test
num_bytes: 269339730
num_examples: 2448543
download_size: 197308481
dataset_size: 1077358590
task_categories:
- tabular-classification
language:
- en
tags:
- adult
size_categories:
- 1M<n<10M
---
# Dataset Card for "folktables-acs-income"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jon-tow/wice | ---
license: odc-by
configs:
- config_name: claim
data_files:
- split: train
path: data/claim_train.jsonl
- split: validation
path: data/claim_dev.jsonl
- split: test
path: data/claim_test.jsonl
- config_name: subclaim
data_files:
- split: train
path: data/subclaim_train.jsonl
- split: validation
path: data/subclaim_dev.jsonl
- split: test
path: data/subclaim_test.jsonl
---
# wice
WiCE: Real-World Entailment for Claims in Wikipedia
## Dataset Details
### Dataset Description
WiCE is a fine-grained textual entailment dataset built on natural claim and evidence pairs extracted from Wikipedia. Given a sentence in Wikipedia and the corresponding article(s) it cites, we annotate the entailment label, a list of sentences in the cited article(s) that support the claim sentence, and tokens in the claim that are unsupported by the article(s).
This is the `entailment_retrieval` subset, which includes the WiCE dataset for the entailment and retrieval tasks. The `claim` config includes data with the original claims, and `subclaim` includes data with the decomposed claims (fine-grained annotation obtained with Claim-Split).
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/ryokamoi/wice
- **Paper:** Kamoi, Goyal, Rodriguez, and Durett(2023) [WiCE: Real-World Entailment for Claims in Wikipedia
](https://arxiv.org/abs/2303.01432).
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@inproceedings{kamoi-etal-2023-wice,
title = "{W}i{CE}: Real-World Entailment for Claims in {W}ikipedia",
author = "Kamoi, Ryo and
Goyal, Tanya and
Rodriguez, Juan and
Durrett, Greg",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.470",
pages = "7561--7583",
}
```
|
SEA-AI/seadog-chat-history | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
|
facebook/asset | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|other-turkcorpus
task_categories:
- text-classification
- text2text-generation
task_ids:
- text-simplification
paperswithcode_id: asset
pretty_name: ASSET
config_names:
- ratings
- simplification
tags:
- simplification-evaluation
dataset_info:
- config_name: ratings
features:
- name: original
dtype: string
- name: simplification
dtype: string
- name: original_sentence_id
dtype: int32
- name: aspect
dtype:
class_label:
names:
'0': meaning
'1': fluency
'2': simplicity
- name: worker_id
dtype: int32
- name: rating
dtype: int32
splits:
- name: full
num_bytes: 1036845
num_examples: 4500
download_size: 44642
dataset_size: 1036845
- config_name: simplification
features:
- name: original
dtype: string
- name: simplifications
sequence: string
splits:
- name: validation
num_bytes: 2303484
num_examples: 2000
- name: test
num_bytes: 411019
num_examples: 359
download_size: 1055163
dataset_size: 2714503
configs:
- config_name: ratings
data_files:
- split: full
path: ratings/full-*
- config_name: simplification
data_files:
- split: validation
path: simplification/validation-*
- split: test
path: simplification/test-*
default: true
---
# Dataset Card for ASSET
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [ASSET Github repository](https://github.com/facebookresearch/asset)
- **Paper:** [ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations](https://www.aclweb.org/anthology/2020.acl-main.424/)
- **Point of Contact:** [Louis Martin](louismartincs@gmail.com)
### Dataset Summary
[ASSET](https://github.com/facebookresearch/asset) [(Alva-Manchego et al., 2020)](https://www.aclweb.org/anthology/2020.acl-main.424.pdf) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from [TurkCorpus]( https://github.com/cocoxu/simplification/) [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence
splitting in [HSplit](https://www.aclweb.org/anthology/D18-1081.pdf)), the simplifications in ASSET encompass a variety of rewriting transformations.
### Supported Tasks and Leaderboards
The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
### Languages
The text in this dataset is in English (`en`).
## Dataset Structure
### Data Instances
- `simplification` configuration: an instance consists in an original sentence and 10 possible reference simplifications.
- `ratings` configuration: a data instance consists in an original sentence, a simplification obtained by an automated system, and a judgment of quality along one of three axes by a crowd worker.
### Data Fields
- `original`: an original sentence from the source datasets
- `simplifications`: in the `simplification` config, a set of reference simplifications produced by crowd workers.
- `simplification`: in the `ratings` config, a simplification of the original obtained by an automated system
- `aspect`: in the `ratings` config, the aspect on which the simplification is evaluated, one of `meaning`, `fluency`, `simplicity`
- `rating`: a quality rating between 0 and 100
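To make the field descriptions above concrete, here is a hypothetical sketch of what instances of the two configurations might look like; the simplification strings reuse the example sentence quoted later in this card, while the ids and rating value are invented for illustration:

```python
# Illustrative (invented) instances matching the field descriptions above.
simplification_instance = {
    "original": "He settled in London, devoting himself chiefly to practical teaching.",
    "simplifications": [
        "He lived in London. He was a teacher.",
        "He settled in London and mostly taught.",
    ],
}

# In the `ratings` config, `aspect` is a class label over three values.
ASPECTS = ["meaning", "fluency", "simplicity"]
ratings_instance = {
    "original": simplification_instance["original"],
    "simplification": "He lived in London. He was a teacher.",
    "original_sentence_id": 0,          # invented id
    "aspect": ASPECTS.index("fluency"), # stored as an integer label (here 1)
    "worker_id": 0,                     # invented id
    "rating": 85,                       # a quality rating between 0 and 100
}
print(ASPECTS[ratings_instance["aspect"]])  # → fluency
```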
### Data Splits
ASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training.
Each input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.
| | Dev | Test | Total |
| ----- | ------ | ---- | ----- |
| Input Sentences | 2000 | 359 | 2359 |
| Reference Simplifications | 20000 | 3590 | 23590 |
The test and validation sets are the same as those of TurkCorpus. The split was random.
There are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting.
## Dataset Creation
### Curation Rationale
ASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the [TurkCorpus]( https://github.com/cocoxu/simplification/) dataset from [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). The 2,359 input sentences of TurkCorpus are a sample of "standard" (not simple) sentences from the [Parallel Wikipedia Simplification (PWKP)](https://www.informatik.tu-darmstadt.de/ukp/research_6/data/sentence_simplification/simple_complex_sentence_pairs/index.en.jsp) dataset [(Zhu et al., 2010)](https://www.aclweb.org/anthology/C10-1152.pdf), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). No further information is provided on the sampling strategy.
The TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit [(Sulem et al., 2018)](https://www.aclweb.org/anthology/D18-1081.pdf), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.
An example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:
> **Original:** He settled in London, devoting himself chiefly to practical teaching.
>
> **TurkCorpus:** He rooted in London, devoting himself mainly to practical teaching.
>
> **HSplit:** He settled in London. He devoted himself chiefly to practical teaching.
>
> **ASSET:** He lived in London. He was a teacher.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The input sentences are from English Wikipedia (August 22, 2009 version). No demographic information is available for the writers of these sentences. However, most Wikipedia editors are male (Lam, 2011; Graells-Garrido, 2015), which has an impact on the topics covered (see also [the Wikipedia page on Wikipedia gender bias](https://en.wikipedia.org/wiki/Gender_bias_on_Wikipedia)). In addition, Wikipedia editors are mostly white, young, and from the Northern Hemisphere [(Wikipedia: Systemic bias)](https://en.wikipedia.org/wiki/Wikipedia:Systemic_bias).
Reference sentences were written by 42 workers on Amazon Mechanical Turk (AMT). The requirements for being an annotator were:
- Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.
- Being a resident of the United States, United Kingdom or Canada.
- Having a HIT approval rate over 95%, and over 1000 HITs approved.
No other demographic or compensation information is provided in the ASSET paper.
### Annotations
#### Annotation process
The instructions given to the annotators are available [here](https://github.com/facebookresearch/asset/blob/master/crowdsourcing/AMT_AnnotationInstructions.pdf).
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases (Schmahl et al., 2020) and racial biases (Adams et al., 2019).
> Adams, Julia, Hannah Brückner, and Cambria Naslund. "Who Counts as a Notable Sociologist on Wikipedia? Gender, Race, and the “Professor Test”." Socius 5 (2019): 2378023118823946.
> Schmahl, Katja Geertruida, et al. "Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings." Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020.
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
ASSET was developed by researchers at the University of Sheffield, Inria,
Facebook AI Research, and Imperial College London. The work was partly supported by Benoît Sagot's chair in the PRAIRIE institute, funded by the French National Research Agency (ANR) as part of the "Investissements d’avenir" program (reference ANR-19-P3IA-0001).
### Licensing Information
[Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
```
@inproceedings{alva-manchego-etal-2020-asset,
title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations",
author = "Alva-Manchego, Fernando and
Martin, Louis and
Bordes, Antoine and
Scarton, Carolina and
Sagot, Beno{\^\i}t and
Specia, Lucia",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.424",
pages = "4668--4679",
}
```
This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r).
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
Deojoandco/reddit-ah-dialog-annotations | ---
dataset_info:
features:
- name: url
dtype: string
- name: id
dtype: string
- name: num_comments
dtype: int64
- name: name
dtype: string
- name: title
dtype: string
- name: body
dtype: string
- name: score
dtype: int64
- name: upvote_ratio
dtype: float64
- name: distinguished
dtype: string
- name: over_18
dtype: bool
- name: created_utc
dtype: int64
- name: comments
list:
- name: body
dtype: string
- name: created_utc
dtype: float64
- name: distinguished
dtype: string
- name: id
dtype: string
- name: permalink
dtype: string
- name: score
dtype: int64
- name: best_num_comments
dtype: int64
- name: query
dtype: string
- name: dialog
dtype: string
- name: annotation_success
dtype: bool
- name: annotation_text
dtype: string
splits:
- name: train
num_bytes: 33847703
num_examples: 2921
- name: validation
num_bytes: 3120682
num_examples: 293
- name: test
num_bytes: 3377043
num_examples: 292
download_size: 23040594
dataset_size: 40345428
---
# Dataset Card for "reddit-ah-dialog-annotations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
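The card above documents Reddit submissions with a nested `comments` list and a flattened `dialog` string field. The actual flattening logic used to build this dataset is not documented in the card; as a purely illustrative sketch, a thread following the declared schema could be turned into a dialog string like so (the `Q:`/`A:` template and the score-based ordering are assumptions, not the dataset's real format):

```python
def thread_to_dialog(post, max_comments=5):
    """Flatten a submission and its top comments into a dialog string.

    `post` follows the record schema declared in the card's YAML: a dict
    with "title", "body", and a "comments" list of dicts, each carrying
    at least "body" and "score".
    """
    # Keep the highest-scoring comments first, roughly mirroring a "best" sort.
    comments = sorted(post["comments"], key=lambda c: c["score"], reverse=True)
    lines = [f"Q: {post['title']}"]
    if post["body"]:
        lines.append(post["body"])
    for c in comments[:max_comments]:
        lines.append(f"A: {c['body']}")
    return "\n".join(lines)

post = {
    "title": "Why is the sky blue?",
    "body": "",
    "comments": [
        {"body": "Rayleigh scattering.", "score": 42},
        {"body": "Ask a physicist.", "score": 3},
    ],
}
print(thread_to_dialog(post))
```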
ivanpanshin/piqa_qa_formatted | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 6778760
num_examples: 16113
- name: validation
num_bytes: 769417
num_examples: 1838
download_size: 2409083
dataset_size: 7548177
---
# Dataset Card for "piqa_qa_formatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
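This card exposes PIQA examples reformatted into a single `text` field. The exact template used to produce `piqa_qa_formatted` is not documented here; the sketch below shows one plausible way a raw PIQA example (goal, two candidate solutions, and a label) could be rendered into such a field, with the layout being a hypothetical choice:

```python
def format_piqa(goal, sol1, sol2, label):
    """Render one PIQA example as a single question/answer text field.

    `label` selects the correct solution: 0 for `sol1`, 1 for `sol2`.
    The template is illustrative, not the dataset's actual one.
    """
    answer = sol1 if label == 0 else sol2
    return f"Question: {goal}\nAnswer: {answer}"

example = {
    "goal": "How do you open a jar?",
    "sol1": "Twist the lid counterclockwise.",
    "sol2": "Hit it with a hammer.",
    "label": 0,
}
print(format_piqa(**example))
```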
open-llm-leaderboard/details_ryandt__MusingCaterpillar | ---
pretty_name: Evaluation run of ryandt/MusingCaterpillar
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ryandt/MusingCaterpillar](https://huggingface.co/ryandt/MusingCaterpillar) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ryandt__MusingCaterpillar\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-07T22:15:17.631393](https://huggingface.co/datasets/open-llm-leaderboard/details_ryandt__MusingCaterpillar/blob/main/results_2024-01-07T22-15-17.631393.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6565329262711282,\n\
\ \"acc_stderr\": 0.03189924500320495,\n \"acc_norm\": 0.6577462207289702,\n\
\ \"acc_norm_stderr\": 0.03253908424103263,\n \"mc1\": 0.5703794369645043,\n\
\ \"mc1_stderr\": 0.01732923458040909,\n \"mc2\": 0.7092826239626928,\n\
\ \"mc2_stderr\": 0.015026732524325976\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7030716723549488,\n \"acc_stderr\": 0.013352025976725225,\n\
\ \"acc_norm\": 0.7252559726962458,\n \"acc_norm_stderr\": 0.013044617212771227\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7179844652459669,\n\
\ \"acc_stderr\": 0.004490612245335218,\n \"acc_norm\": 0.8833897629954193,\n\
\ \"acc_norm_stderr\": 0.003202993346991063\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6666666666666666,\n\
\ \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.6666666666666666,\n\
\ \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.038781398887976104,\n\
\ \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.038781398887976104\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7245283018867924,\n \"acc_stderr\": 0.027495663683724057,\n\
\ \"acc_norm\": 0.7245283018867924,\n \"acc_norm_stderr\": 0.027495663683724057\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720683,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6878612716763006,\n\
\ \"acc_stderr\": 0.03533133389323657,\n \"acc_norm\": 0.6878612716763006,\n\
\ \"acc_norm_stderr\": 0.03533133389323657\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.041633319989322626,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.041633319989322626\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n\
\ \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555498,\n\
\ \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555498\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4312169312169312,\n \"acc_stderr\": 0.025506481698138215,\n \"\
acc_norm\": 0.4312169312169312,\n \"acc_norm_stderr\": 0.025506481698138215\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411019,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7838709677419354,\n\
\ \"acc_stderr\": 0.02341529343356853,\n \"acc_norm\": 0.7838709677419354,\n\
\ \"acc_norm_stderr\": 0.02341529343356853\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\"\
: 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386414,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386414\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919436,\n\
\ \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919436\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657262,\n\
\ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657262\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616255,\n \
\ \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.030066761582977934,\n\
\ \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.030066761582977934\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669237,\n \"\
acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669237\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5509259259259259,\n \"acc_stderr\": 0.03392238405321617,\n \"\
acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.03392238405321617\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8529411764705882,\n \"acc_stderr\": 0.024857478080250458,\n \"\
acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.024857478080250458\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290916,\n \
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n\
\ \"acc_stderr\": 0.030769352008229143,\n \"acc_norm\": 0.6995515695067265,\n\
\ \"acc_norm_stderr\": 0.030769352008229143\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8244274809160306,\n \"acc_stderr\": 0.03336820338476074,\n\
\ \"acc_norm\": 0.8244274809160306,\n \"acc_norm_stderr\": 0.03336820338476074\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406957,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406957\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8288633461047255,\n\
\ \"acc_stderr\": 0.01346820161406629,\n \"acc_norm\": 0.8288633461047255,\n\
\ \"acc_norm_stderr\": 0.01346820161406629\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.024027745155265023,\n\
\ \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.024027745155265023\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.46145251396648046,\n\
\ \"acc_stderr\": 0.016672731267552258,\n \"acc_norm\": 0.46145251396648046,\n\
\ \"acc_norm_stderr\": 0.016672731267552258\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7483660130718954,\n \"acc_stderr\": 0.024848018263875195,\n\
\ \"acc_norm\": 0.7483660130718954,\n \"acc_norm_stderr\": 0.024848018263875195\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n\
\ \"acc_stderr\": 0.02540383297817961,\n \"acc_norm\": 0.7234726688102894,\n\
\ \"acc_norm_stderr\": 0.02540383297817961\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \
\ \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4641460234680574,\n\
\ \"acc_stderr\": 0.012737361318730583,\n \"acc_norm\": 0.4641460234680574,\n\
\ \"acc_norm_stderr\": 0.012737361318730583\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6985294117647058,\n \"acc_stderr\": 0.027875982114273168,\n\
\ \"acc_norm\": 0.6985294117647058,\n \"acc_norm_stderr\": 0.027875982114273168\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6617647058823529,\n \"acc_stderr\": 0.01913994374848704,\n \
\ \"acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.01913994374848704\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n\
\ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197771,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197771\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5703794369645043,\n\
\ \"mc1_stderr\": 0.01732923458040909,\n \"mc2\": 0.7092826239626928,\n\
\ \"mc2_stderr\": 0.015026732524325976\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8066298342541437,\n \"acc_stderr\": 0.011099796645920531\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6224412433661866,\n \
\ \"acc_stderr\": 0.01335315066635854\n }\n}\n```"
repo_url: https://huggingface.co/ryandt/MusingCaterpillar
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|arc:challenge|25_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|gsm8k|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hellaswag|10_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-07T22-15-17.631393.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-07T22-15-17.631393.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- '**/details_harness|winogrande|5_2024-01-07T22-15-17.631393.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-07T22-15-17.631393.parquet'
- config_name: results
data_files:
- split: 2024_01_07T22_15_17.631393
path:
- results_2024-01-07T22-15-17.631393.parquet
- split: latest
path:
- results_2024-01-07T22-15-17.631393.parquet
---
# Dataset Card for Evaluation run of ryandt/MusingCaterpillar
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ryandt/MusingCaterpillar](https://huggingface.co/ryandt/MusingCaterpillar) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ryandt__MusingCaterpillar",
"harness_winogrande_5",
split="train")
```
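The config names listed in the YAML header above follow a simple pattern derived from the harness task name: every non-alphanumeric character (`|`, `-`, `:`) becomes an underscore and the few-shot count is appended. As a minimal illustrative sketch (the helper name `task_to_config` is our own, not part of any library), the mapping can be reproduced like this:

```python
import re

def task_to_config(task: str, num_fewshot: int) -> str:
    """Map a harness task id to the dataset config name used in this repo.

    e.g. "harness|hendrycksTest-anatomy" with 5 shots
         -> "harness_hendrycksTest_anatomy_5"
    """
    return re.sub(r"[^0-9A-Za-z]+", "_", f"{task}_{num_fewshot}")

print(task_to_config("harness|hendrycksTest-anatomy", 5))
# -> harness_hendrycksTest_anatomy_5
print(task_to_config("harness|truthfulqa:mc", 0))
# -> harness_truthfulqa_mc_0
```

This makes it straightforward to compute the config name to pass to `load_dataset` for any task shown in the results below.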
## Latest results
These are the [latest results from run 2024-01-07T22:15:17.631393](https://huggingface.co/datasets/open-llm-leaderboard/details_ryandt__MusingCaterpillar/blob/main/results_2024-01-07T22-15-17.631393.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6565329262711282,
"acc_stderr": 0.03189924500320495,
"acc_norm": 0.6577462207289702,
"acc_norm_stderr": 0.03253908424103263,
"mc1": 0.5703794369645043,
"mc1_stderr": 0.01732923458040909,
"mc2": 0.7092826239626928,
"mc2_stderr": 0.015026732524325976
},
"harness|arc:challenge|25": {
"acc": 0.7030716723549488,
"acc_stderr": 0.013352025976725225,
"acc_norm": 0.7252559726962458,
"acc_norm_stderr": 0.013044617212771227
},
"harness|hellaswag|10": {
"acc": 0.7179844652459669,
"acc_stderr": 0.004490612245335218,
"acc_norm": 0.8833897629954193,
"acc_norm_stderr": 0.003202993346991063
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6513157894736842,
"acc_stderr": 0.038781398887976104,
"acc_norm": 0.6513157894736842,
"acc_norm_stderr": 0.038781398887976104
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7245283018867924,
"acc_stderr": 0.027495663683724057,
"acc_norm": 0.7245283018867924,
"acc_norm_stderr": 0.027495663683724057
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720683,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720683
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.03533133389323657,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.03533133389323657
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.041633319989322626,
"acc_norm": 0.78,
"acc_norm_stderr": 0.041633319989322626
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555498,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555498
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4312169312169312,
"acc_stderr": 0.025506481698138215,
"acc_norm": 0.4312169312169312,
"acc_norm_stderr": 0.025506481698138215
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386414,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386414
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919436,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919436
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657262,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657262
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616255,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.030066761582977934,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.030066761582977934
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8495412844036697,
"acc_stderr": 0.015328563932669237,
"acc_norm": 0.8495412844036697,
"acc_norm_stderr": 0.015328563932669237
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.03392238405321617,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.03392238405321617
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.024857478080250458,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.024857478080250458
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.025744902532290916,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.025744902532290916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.030769352008229143,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.030769352008229143
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8244274809160306,
"acc_stderr": 0.03336820338476074,
"acc_norm": 0.8244274809160306,
"acc_norm_stderr": 0.03336820338476074
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406957,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406957
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8288633461047255,
"acc_stderr": 0.01346820161406629,
"acc_norm": 0.8288633461047255,
"acc_norm_stderr": 0.01346820161406629
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.024027745155265023,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.024027745155265023
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.46145251396648046,
"acc_stderr": 0.016672731267552258,
"acc_norm": 0.46145251396648046,
"acc_norm_stderr": 0.016672731267552258
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7483660130718954,
"acc_stderr": 0.024848018263875195,
"acc_norm": 0.7483660130718954,
"acc_norm_stderr": 0.024848018263875195
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.02540383297817961,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.02540383297817961
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48226950354609927,
"acc_stderr": 0.02980873964223777,
"acc_norm": 0.48226950354609927,
"acc_norm_stderr": 0.02980873964223777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4641460234680574,
"acc_stderr": 0.012737361318730583,
"acc_norm": 0.4641460234680574,
"acc_norm_stderr": 0.012737361318730583
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6985294117647058,
"acc_stderr": 0.027875982114273168,
"acc_norm": 0.6985294117647058,
"acc_norm_stderr": 0.027875982114273168
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6617647058823529,
"acc_stderr": 0.01913994374848704,
"acc_norm": 0.6617647058823529,
"acc_norm_stderr": 0.01913994374848704
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128448,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128448
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616914,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197771,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197771
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5703794369645043,
"mc1_stderr": 0.01732923458040909,
"mc2": 0.7092826239626928,
"mc2_stderr": 0.015026732524325976
},
"harness|winogrande|5": {
"acc": 0.8066298342541437,
"acc_stderr": 0.011099796645920531
},
"harness|gsm8k|5": {
"acc": 0.6224412433661866,
"acc_stderr": 0.01335315066635854
}
}
```
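The aggregate scores under `"all"` are unweighted means over the per-task entries. As a minimal sketch of how such an average can be recomputed from the dictionary above (shown here on a small hand-copied subset of the MMLU tasks; the exact aggregation used by the leaderboard may differ in which tasks it includes):

```python
# Subset of the per-task results copied from the JSON above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.3},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6666666666666666},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6513157894736842},
}

# Select the MMLU sub-tasks by their task-name prefix and average "acc".
mmlu_tasks = [k for k in results if k.startswith("harness|hendrycksTest-")]
mmlu_acc = sum(results[k]["acc"] for k in mmlu_tasks) / len(mmlu_tasks)
print(round(mmlu_acc, 4))  # -> 0.5393
```

The same pattern works for `acc_norm` or for any other task-family prefix (e.g. `harness|truthfulqa`).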
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
jay401521/twolabels | ---
dataset_info:
features:
- name: id
dtype: int64
- name: domain
dtype: string
- name: label
dtype: int64
- name: rank
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 6505957
num_examples: 70594
download_size: 0
dataset_size: 6505957
---
# Dataset Card for "twolabels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ruliad/stack-v2-python | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: repo_url
dtype: string
- name: snapshot_id
dtype: string
- name: revision_id
dtype: string
- name: directory_id
dtype: string
- name: branch_name
dtype: string
- name: visit_date
dtype: timestamp[ns]
- name: revision_date
dtype: timestamp[ns]
- name: committer_date
dtype: timestamp[ns]
- name: github_id
dtype: int64
- name: star_events_count
dtype: int64
- name: fork_events_count
dtype: int64
- name: gha_license_id
dtype: string
- name: gha_created_at
dtype: timestamp[ns]
- name: gha_updated_at
dtype: timestamp[ns]
- name: gha_pushed_at
dtype: timestamp[ns]
- name: gha_language
dtype: string
- name: files
list:
- name: blob_id
dtype: string
- name: path
dtype: string
- name: content_id
dtype: string
- name: language
dtype: string
- name: length_bytes
dtype: int64
- name: detected_licenses
sequence: string
- name: license_type
dtype: string
- name: src_encoding
dtype: string
- name: is_vendor
dtype: bool
- name: is_generated
dtype: bool
- name: alphanum_fraction
dtype: float32
- name: alpha_fraction
dtype: float32
- name: num_lines
dtype: int32
- name: avg_line_length
dtype: float32
- name: max_line_length
dtype: int32
- name: num_files
dtype: int64
splits:
- name: train
num_bytes: 20887324838.790043
num_examples: 8954903
download_size: 15102959847
dataset_size: 20887324838.790043
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
WeiHai-haiyuan/666 | ---
license: openrail
---
|
shamotskyi/lmes_catsmc | ---
configs:
- config_name: default
data_files:
- split: train
path: "WhichWordWrongCatTask.jsonl"
- split: fewshot
path: "WhichWordWrongCatTask-fewshot.jsonl"
language:
- uk
size_categories:
- 1K<n<10K
license: cc-by-nc-4.0
annotations_creators:
- machine-generated
multilinguality:
- monolingual
task_ids:
- multiple-choice-qa
---
# Dataset Card for LMES-cats_mc (Eval-UA-tion benchmark)
This dataset (described in paper **TODO**) is part of the LMES (LMentry-static-UA) set of tasks of the Eval-UA-tion benchmark, which aims to evaluate (L)LMs' Ukrainian language skills.
The LMES dataset is inspired by the (awesome!) LMentry benchmark ([aviaefrat/lmentry](https://github.com/aviaefrat/lmentry/)).
LMES-cats_mc asks which of the words doesn't belong with the rest (e.g. head, foot, arm, finger, postmodernism).
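As a rough sketch of what one multiple-choice record might look like, here is how a single JSONL line could be parsed; note that the field names below are illustrative assumptions, not the actual schema of `WhichWordWrongCatTask.jsonl`:

```python
import json

# Illustrative record for the which-word-doesn't-belong task; the actual
# field names inside WhichWordWrongCatTask.jsonl may differ.
line = '{"question": "head, foot, arm, finger, postmodernism", "answer": "postmodernism"}'
record = json.loads(line)

# Split the option list and check the labelled answer is among the options.
options = [w.strip() for w in record["question"].split(",")]
assert record["answer"] in options
```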
A better description will follow. |
Multimodal-Fatima/OxfordPets_test_facebook_opt_1.3b_Attributes_Caption_ns_3669_random | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_1_bs_16
num_bytes: 122168789.375
num_examples: 3669
- name: fewshot_3_bs_16
num_bytes: 124212925.375
num_examples: 3669
download_size: 241370243
dataset_size: 246381714.75
---
# Dataset Card for "OxfordPets_test_facebook_opt_1.3b_Attributes_Caption_ns_3669_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/sst2_affix_pos | ---
dataset_info:
features:
- name: idx
dtype: int32
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: words_with_affixes
sequence: string
splits:
- name: validation
num_bytes: 8687
num_examples: 55
download_size: 9498
dataset_size: 8687
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "sst2_affix_pos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yuan-sf63/word_label_0.2_64_Nf | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
splits:
- name: train
num_bytes: 44525516.432587914
num_examples: 70712
- name: validation
num_bytes: 4947349.567412084
num_examples: 7857
download_size: 8983283
dataset_size: 49472866.0
---
# Dataset Card for "word_label_0.2_64_Nf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/warmy_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of warmy/温米 (Arknights)
This is the dataset of warmy/温米 (Arknights), containing 19 images and their tags.
The core tags of this character are `animal_ears, long_hair, rabbit_ears, brown_hair, twintails, very_long_hair, brown_eyes, hair_ornament, hairband, bow, grey_hair, low_twintails, rabbit_girl`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 19 | 36.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/warmy_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 19 | 29.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/warmy_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 51 | 56.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/warmy_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/warmy_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, open_mouth, solo, socks, long_sleeves, upper_teeth_only, white_background, :d, blush, carrot, flower, food, holding, maid_headdress, orange_eyes, shoes, simple_background, skirt, standing, virtual_youtuber |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | solo | socks | long_sleeves | upper_teeth_only | white_background | :d | blush | carrot | flower | food | holding | maid_headdress | orange_eyes | shoes | simple_background | skirt | standing | virtual_youtuber |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------|:--------|:---------------|:-------------------|:-------------------|:-----|:--------|:---------|:---------|:-------|:----------|:-----------------|:--------------|:--------|:--------------------|:--------|:-----------|:-------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
quoref | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Quoref
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: quoref
tags:
- coreference-resolution
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 44377729
num_examples: 19399
- name: validation
num_bytes: 5442031
num_examples: 2418
download_size: 5078438
dataset_size: 49819760
---
# Dataset Card for "quoref"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://allenai.org/data/quoref
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning](https://aclanthology.org/D19-1606/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.08 MB
- **Size of the generated dataset:** 49.82 MB
- **Total amount of disk used:** 54.90 MB
### Dataset Summary
Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this
span-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard
coreferences before selecting the appropriate span(s) in the paragraphs for answering questions.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 5.08 MB
- **Size of the generated dataset:** 49.82 MB
- **Total amount of disk used:** 54.90 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [1633],
"text": ["Frankie"]
},
"context": "\"Frankie Bono, a mentally disturbed hitman from Cleveland, comes back to his hometown in New York City during Christmas week to ...",
"id": "bfc3b34d6b7e73c0bd82a009db12e9ce196b53e6",
"question": "What is the first name of the person who has until New Year's Eve to perform a hit?",
"title": "Blast of Silence",
"url": "https://en.wikipedia.org/wiki/Blast_of_Silence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `context`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
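As a minimal sketch of how `answer_start` relates to `context`, the answer text is the character span of `context` beginning at that offset (synthetic strings below; real contexts are full Wikipedia paragraphs):

```python
# Synthetic example showing that `answer_start` is a character offset
# into `context`, and `text` is the span found there.
context = "Frankie Bono, a mentally disturbed hitman, comes back to his hometown."
answers = {"answer_start": [0], "text": ["Frankie"]}

spans = [
    context[start:start + len(text)]
    for start, text in zip(answers["answer_start"], answers["text"])
]
assert spans == answers["text"]
```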
### Data Splits
| name |train|validation|
|-------|----:|---------:|
|default|19399| 2418|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{allenai:quoref,
author = {Pradeep Dasigi and Nelson F. Liu and Ana Marasovic and Noah A. Smith and Matt Gardner},
title = {Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning},
journal = {arXiv:1908.05803v2 },
year = {2019},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
1113132das/5 | ---
license: openrail
---
|
erikaxenia/id_card_class | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 130472837.0
num_examples: 470
- name: valid
num_bytes: 21179110.0
num_examples: 59
- name: test
num_bytes: 16112586.0
num_examples: 58
download_size: 166464862
dataset_size: 167764533.0
---
# Dataset Card for "id_card_class"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kevem/tocadovybbe | ---
license: bigscience-openrail-m
---
|
abhi5403/indian_food_images | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': burger
'1': butter_naan
'2': chai
'3': chapati
'4': chole_bhature
'5': dal_makhani
'6': dhokla
'7': fried_rice
'8': idli
'9': jalebi
'10': kaathi_rolls
'11': kadai_paneer
'12': kulfi
'13': masala_dosa
'14': momos
'15': paani_puri
'16': pakode
'17': pav_bhaji
'18': pizza
'19': samosa
splits:
- name: train
num_bytes: 1387216292.8794332
num_examples: 5328
- name: test
num_bytes: 219237448.3925666
num_examples: 941
download_size: 1600890186
dataset_size: 1606453741.2719998
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Boss9xy/ok3 | ---
license: apache-2.0
---
|
kestanee/history_data | ---
license: apache-2.0
---
|
jcinis/concepts | ---
license: openrail
---
|
senhorsapo/Kid | ---
license: openrail
---
|
rombodawg/LimitlessCodeTraining_Guanaco_Format | ---
license: mit
---
This is the LimitlessCodeTraining dataset converted to Guanaco format. Enjoy!
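As a minimal sketch, a Guanaco-style conversion typically renders each instruction/response pair into a single prompt string; the exact template used for this dataset is an assumption here:

```python
# Sketch of a Guanaco-style prompt layout (the precise template used for
# this particular conversion is an assumption, not taken from the dataset).
def to_guanaco(instruction: str, response: str) -> str:
    return f"### Human: {instruction}\n### Assistant: {response}"

example = to_guanaco("Write hello world in Python.", 'print("Hello, world!")')
print(example)
```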
Original model card:
_________________
----- BREAK THROUGH YOUR LIMITS -----
_________________

LimitlessCodeTraining is the direct sequel to MegaCodeTraining, which is now called Legacy_MegaCodeTraining200k.
This dataset is just over 646k lines of pure refined coding data.
It is the pinnacle of open source code training: a combination of the MegaCode training dataset filtered by shahules786 (shout-out to him) and the bigcode commitpackft dataset I converted to Alpaca format.
The datasets that were used to create this dataset are linked below:
- https://huggingface.co/datasets/rombodawg/Rombodawgs_commitpackft_Evolinstruct_Converted
- https://huggingface.co/datasets/shahules786/megacode-best |
Taegyuu/KoAlpaca_hira_v1.1a | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 24149775
num_examples: 21267
download_size: 24149775
dataset_size: 24149775
task_categories:
- text-generation
language:
- ko
tags:
- KoAlpaca
pretty_name: KoAlpaca_hira_v1.1c
---
# Dataset Card for "KoAlpaca-v1.1a"
## Project Repo
- Github Repo: [Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca)
## How to use
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("beomi/KoAlpaca-v1.1a", split="train")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 21272
})
```
```python
>>> ds[0]
{'instruction': '양파는 어떤 식물 부위인가요? 그리고 고구마는 뿌리인가요?',
'output': '양파는 잎이 아닌 식물의 줄기 부분입니다. 고구마는 식물의 뿌리 부분입니다. \n\n식물의 부위의 구분에 대해 궁금해하는 분이라면 분명 이 질문에 대한 답을 찾고 있을 것입니다. 양파는 잎이 아닌 줄기 부분입니다. 고구마는 다른 질문과 답변에서 언급된 것과 같이 뿌리 부분입니다. 따라서, 양파는 식물의 줄기 부분이 되고, 고구마는 식물의 뿌리 부분입니다.\n\n 덧붙이는 답변: 고구마 줄기도 볶아먹을 수 있나요? \n\n고구마 줄기도 식용으로 볶아먹을 수 있습니다. 하지만 줄기 뿐만 아니라, 잎, 씨, 뿌리까지 모든 부위가 식용으로 활용되기도 합니다. 다만, 한국에서는 일반적으로 뿌리 부분인 고구마를 주로 먹습니다.',
 'url': 'https://kin.naver.com/qna/detail.naver?d1id=11&dirId=1116&docId=55320268'}
``` |
SUSTech/OpenOrca-trans-gpt4 | ---
dataset_info:
features:
- name: id
dtype: string
- name: en
dtype: string
- name: zh
dtype: string
splits:
- name: train
num_bytes: 1655037
num_examples: 621
download_size: 766655
dataset_size: 1655037
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "OpenOrca-trans-gpt4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1712835946 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 30852
num_examples: 77
download_size: 18216
dataset_size: 30852
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1712835946"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
schema_guided_dstc8 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- token-classification
- text-classification
task_ids:
- dialogue-modeling
- multi-class-classification
- parsing
paperswithcode_id: sgd
pretty_name: Schema-Guided Dialogue
dataset_info:
- config_name: dialogues
features:
- name: dialogue_id
dtype: string
- name: services
sequence: string
- name: turns
sequence:
- name: speaker
dtype:
class_label:
names:
'0': USER
'1': SYSTEM
- name: utterance
dtype: string
- name: frames
sequence:
- name: service
dtype: string
- name: slots
sequence:
- name: slot
dtype: string
- name: start
dtype: int32
- name: exclusive_end
dtype: int32
- name: state
struct:
- name: active_intent
dtype: string
- name: requested_slots
sequence: string
- name: slot_values
sequence:
- name: slot_name
dtype: string
- name: slot_value_list
sequence: string
- name: actions
sequence:
- name: act
dtype:
class_label:
names:
'0': AFFIRM
'1': AFFIRM_INTENT
'2': CONFIRM
'3': GOODBYE
'4': INFORM
'5': INFORM_COUNT
'6': INFORM_INTENT
'7': NEGATE
'8': NEGATE_INTENT
'9': NOTIFY_FAILURE
'10': NOTIFY_SUCCESS
'11': OFFER
'12': OFFER_INTENT
'13': REQUEST
'14': REQUEST_ALTS
'15': REQ_MORE
'16': SELECT
'17': THANK_YOU
- name: slot
dtype: string
- name: canonical_values
sequence: string
- name: values
sequence: string
- name: service_results
sequence:
- name: service_results_list
sequence:
- name: service_slot_name
dtype: string
- name: service_canonical_value
dtype: string
- name: service_call
struct:
- name: method
dtype: string
- name: parameters
sequence:
- name: parameter_slot_name
dtype: string
- name: parameter_canonical_value
dtype: string
splits:
- name: train
num_bytes: 158452984
num_examples: 16142
- name: validation
num_bytes: 23553544
num_examples: 2482
- name: test
num_bytes: 41342956
num_examples: 4201
download_size: 617805368
dataset_size: 223349484
- config_name: schema
features:
- name: service_name
dtype: string
- name: description
dtype: string
- name: slots
sequence:
- name: name
dtype: string
- name: description
dtype: string
- name: is_categorical
dtype: bool
- name: possible_values
sequence: string
- name: intents
sequence:
- name: name
dtype: string
- name: description
dtype: string
- name: is_transactional
dtype: bool
- name: required_slots
sequence: string
- name: optional_slots
sequence:
- name: slot_name
dtype: string
- name: slot_value
dtype: string
- name: result_slots
sequence: string
splits:
- name: train
num_bytes: 31513
num_examples: 26
- name: validation
num_bytes: 18798
num_examples: 17
- name: test
num_bytes: 22487
num_examples: 21
download_size: 617805368
dataset_size: 72798
---
# Dataset Card for The Schema-Guided Dialogue Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github repository for The Schema-Guided Dialogue Dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue)
- **Paper:** [Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset](https://arxiv.org/abs/1909.05855)
- **Point of Contact:** [abhirast@google.com](abhirast@google.com)
### Dataset Summary
The Schema-Guided Dialogue dataset (SGD) was developed for the Dialogue State Tracking task of the Eighth Dialog System Technology Challenge (DSTC8).
The SGD dataset consists of over 18k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 17 domains, ranging from banks and events to media, calendar, travel, and weather. For most of these domains, the SGD dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios.
### Supported Tasks and Leaderboards
This dataset is designed to serve as an effective test-bed for intent prediction, slot filling, state tracking (i.e., estimating the user's goal) and language generation, among other tasks for large-scale virtual assistants:
- **Generative dialogue modeling** or `dialogue-modeling`: the text of the dialogues can be used to train a sequence model on the utterances. Performance on this task is typically evaluated with delexicalized-[BLEU](https://huggingface.co/metrics/bleu), inform rate and request success.
- **Intent state tracking**, a `multi-class-classification` task: predict the belief state of the user side of the conversation, performance is measured by [F1](https://huggingface.co/metrics/f1).
- **Action prediction**, a `parsing` task: parse an utterance into the corresponding dialog acts for the system to use. [F1](https://huggingface.co/metrics/f1) is typically reported.
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
- `dialogues` configuration (default): Each dialogue is represented as a sequence of turns, each containing a user or system utterance. The annotations for each turn are grouped into frames, where each frame corresponds to a single service. The annotations for user turns include the active intent, the dialogue state and slot spans for the different slots values mentioned in the turn. For system turns, we have the system actions representing the semantics of the system utterance. Each system action is represented using a dialogue act with optional parameters.
- `schema` configuration: In addition to the dialogues, for each service used in the dataset, a normalized representation of the interface exposed is provided as the schema. The schema contains details like the name of the service, the list of tasks supported by the service (intents) and the attributes of the entities used by the service (slots). The schema also contains natural language descriptions of the service, intents and slots which can be used for developing models which can condition their predictions on the schema.
### Data Fields
Each dialog instance has the following fields:
- `dialogue_id`: A unique identifier for a dialogue.
- `services`: A list of services present in the dialogue.
- `turns`: A list of annotated system or user utterances. Each turn consists of the following fields:
- `speaker`: The speaker for the turn. Either `USER` or `SYSTEM`.
- `utterance`: A string containing the natural language utterance.
- `frames`: A list of frames, each frame containing annotations for a single service and consists of the following fields:
- `service`: The name of the service corresponding to the frame. The slots and intents used in the following fields are taken from the schema of this service.
- `slots`: A list of slot spans in the utterance, only provided for non-categorical slots. Each slot span contains the following fields:
- `slot`: The name of the slot.
- `start`: The index of the starting character in the utterance corresponding to the slot value.
- `exclusive_end`: The index of the character just after the last character corresponding to the slot value in the utterance.
- `actions`: A list of actions corresponding to the system. Each action has the following fields:
- `act`: The type of action.
- `slot`: (optional) A slot argument for some of the actions.
- `values`: (optional) A list of values assigned to the slot. If the values list is non-empty, then the slot must be present.
- `canonical_values`: (optional) The values in their canonicalized form as used by the service. It is a list of strings of the same length as values.
- `service_call`: (system turns only, optional) The request sent to the service. It consists of the following fields:
- `method`: The name of the intent or function of the service or API being executed.
- `parameters`: A pair of lists of the same lengths: `parameter_slot_name` contains slot names and `parameter_canonical_value` contains the corresponding values in their canonicalized form.
- `service_results`: (system turns only, optional) A list of entities containing the results obtained from the service. It is only available for turns in which a service call is made. Each entity is represented as a pair of lists of the same length: `service_slot_name` contains slot names and `service_canonical_value` contains the corresponding canonical values.
- `state`: (user turns only) The dialogue state corresponding to the service. It consists of the following fields:
- `active_intent`: The intent corresponding to the service of the frame which is currently being fulfilled by the system. It takes the value "NONE" if none of the intents are active.
- `requested_slots`: A list of slots requested by the user in the current turn.
- `slot_values`: A pair of lists of the same length: `slot_name` contains slot names and `slot_value_list` contains the corresponding lists of strings. For categorical slots, this list contains a single value assigned to the slot. For non-categorical slots, all the values in this list are spoken variations of each other and are equivalent (e.g., "6 pm", "six in the evening", "evening at 6", etc.).
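Several of the fields above use a parallel-list encoding (`slot_name` paired with `slot_value_list`, `parameter_slot_name` with `parameter_canonical_value`, and so on). A minimal sketch of turning such a pair into a plain Python mapping; the example `state` is hypothetical, not taken from the dataset:

```python
def state_to_dict(state):
    """Convert the parallel-list `slot_values` encoding into a dict
    mapping each slot name to its list of equivalent string values."""
    sv = state["slot_values"]
    return dict(zip(sv["slot_name"], sv["slot_value_list"]))

# Hypothetical user-turn state in the format described above:
state = {
    "active_intent": "FindFlight",
    "requested_slots": [],
    "slot_values": {
        "slot_name": ["destination"],
        "slot_value_list": [["los angeles", "LA", "LAX"]],
    },
}
print(state_to_dict(state))  # {'destination': ['los angeles', 'LA', 'LAX']}
```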
The mapping from action IDs to action names is as follows:
- `0`: `AFFIRM`
- `1`: `AFFIRM_INTENT`
- `2`: `CONFIRM`
- `3`: `GOODBYE`
- `4`: `INFORM`
- `5`: `INFORM_COUNT`
- `6`: `INFORM_INTENT`
- `7`: `NEGATE`
- `8`: `NEGATE_INTENT`
- `9`: `NOTIFY_FAILURE`
- `10`: `NOTIFY_SUCCESS`
- `11`: `OFFER`
- `12`: `OFFER_INTENT`
- `13`: `REQUEST`
- `14`: `REQUEST_ALTS`
- `15`: `REQ_MORE`
- `16`: `SELECT`
- `17`: `THANK_YOU`
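In code, the mapping above can be kept as a simple lookup table indexed by the integer stored in the `act` field (a sketch):

```python
# Action names indexed by the integer action ID from the `act` field.
ACT_NAMES = [
    "AFFIRM", "AFFIRM_INTENT", "CONFIRM", "GOODBYE", "INFORM",
    "INFORM_COUNT", "INFORM_INTENT", "NEGATE", "NEGATE_INTENT",
    "NOTIFY_FAILURE", "NOTIFY_SUCCESS", "OFFER", "OFFER_INTENT",
    "REQUEST", "REQUEST_ALTS", "REQ_MORE", "SELECT", "THANK_YOU",
]

def act_name(act_id: int) -> str:
    """Map an action ID to its human-readable name."""
    return ACT_NAMES[act_id]

print(act_name(13))  # REQUEST
```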
### Data Splits
The dataset is split into `train`, `validation`, and `test` splits with the following sizes:
| | train | validation | test |
|---------------------|------:|-----------:|------:|
| Number of dialogues | 16142 | 2482 | 4201 |
| Number of turns | 48426 | 7446 | 12603 |
## Dataset Creation
### Curation Rationale
The data was collected by first using a dialogue simulator to generate dialogue outlines and then paraphrasing them to obtain natural utterances. Using a dialogue simulator ensures coverage of a large variety of dialogue flows (similar flows are filtered out during simulation to keep the dataset diverse), and dialogues can be generated together with their annotations, as opposed to a Wizard-of-Oz setup, which is prone to manual annotation errors.
### Source Data
#### Initial Data Collection and Normalization
The dialogue outlines are first generated by a simulator that interacts with the services. The simulator consists of two agents playing the roles of the user and the system, interacting with each other through a finite set of actions specified as dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. Notably, the simulation automaton does not encode any domain-specific constraints: all domain-specific constraints are captured in the schema and scenario.
The dialogue paraphrasing framework then converts the outlines generated by the simulator into a natural conversation. Users may refer to the slot values in the dialogue acts in various ways during the conversation, e.g., “los angeles” may be referred to as “LA” or “LAX”. To introduce these natural variations, slot values are replaced with a randomly selected variation, kept consistent across user turns in a dialogue. The actions are then converted to pseudo-natural language utterances using a set of manually defined action-to-text templates, and the resulting utterances for the different actions in a turn are concatenated together.
Finally, the dialogue transformed by these steps is sent to the crowd workers to be reformulated into more natural language. One crowd worker is tasked with paraphrasing all utterances of a dialogue to ensure naturalness and coherence. The crowd workers are asked to exactly repeat the slot values in their paraphrases so that the span indices for the slots can be recovered via string matching.
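Because the slot values are repeated verbatim, span annotations can be recovered from a paraphrase by simple string matching, roughly like this (a sketch of the idea, not the authors' actual tooling):

```python
def recover_span(utterance: str, value: str):
    """Locate a slot value in a paraphrased utterance and return its
    character span in the (start, exclusive_end) format used above."""
    start = utterance.find(value)
    if start == -1:
        return None  # the worker did not repeat the value verbatim
    return {"start": start, "exclusive_end": start + len(value)}

span = recover_span("I want to fly to los angeles", "los angeles")
print(span)  # {'start': 17, 'exclusive_end': 28}
```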
#### Who are the source language producers?
The language structure is machine-generated, and the language realizations are produced by crowd workers. The dataset paper does not provide demographic information for the crowd workers.
### Annotations
#### Annotation process
The annotations are automatically obtained during the initial sampling process and by string matching after reformulation.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by a team of researchers at Google, Mountain View.
### Licensing Information
The dataset is released under CC BY-SA 4.0 license.
### Citation Information
For the DSTC8 task, please cite:
```
@article{corr/abs-2002-01359,
author = {Abhinav Rastogi and
Xiaoxue Zang and
Srinivas Sunkara and
Raghav Gupta and
Pranav Khaitan},
title = {Schema-Guided Dialogue State Tracking Task at {DSTC8}},
journal = {CoRR},
volume = {abs/2002.01359},
year = {2020},
url = {https://arxiv.org/abs/2002.01359},
archivePrefix = {arXiv},
eprint = {2002.01359}
}
```
For the initial release paper please cite:
```
@inproceedings{aaai/RastogiZSGK20,
author = {Abhinav Rastogi and
Xiaoxue Zang and
Srinivas Sunkara and
Raghav Gupta and
Pranav Khaitan},
title = {Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided
Dialogue Dataset},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {8689--8696},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6394}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
dura-garage/nep-spell-mixed-eval | ---
license: mit
---
|
open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo | ---
pretty_name: Evaluation run of Sharathhebbar24/chat_gpt2_dpo
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Sharathhebbar24/chat_gpt2_dpo](https://huggingface.co/Sharathhebbar24/chat_gpt2_dpo)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-24T23:16:20.806902](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo/blob/main/results_2024-01-24T23-16-20.806902.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24915779048270345,\n\
\ \"acc_stderr\": 0.030509906389610868,\n \"acc_norm\": 0.25041231816215265,\n\
\ \"acc_norm_stderr\": 0.03132600249114931,\n \"mc1\": 0.2521419828641371,\n\
\ \"mc1_stderr\": 0.015201522246299965,\n \"mc2\": 0.41257163824244014,\n\
\ \"mc2_stderr\": 0.015127188811834062\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.18686006825938567,\n \"acc_stderr\": 0.011391015649694391,\n\
\ \"acc_norm\": 0.23976109215017063,\n \"acc_norm_stderr\": 0.012476304127453954\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.28978291177056364,\n\
\ \"acc_stderr\": 0.004527343651130803,\n \"acc_norm\": 0.3121888070105557,\n\
\ \"acc_norm_stderr\": 0.0046243936909668975\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3037037037037037,\n\
\ \"acc_stderr\": 0.039725528847851375,\n \"acc_norm\": 0.3037037037037037,\n\
\ \"acc_norm_stderr\": 0.039725528847851375\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n\
\ \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.26,\n\
\ \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \
\ \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.23018867924528302,\n \"acc_stderr\": 0.025907897122408173,\n\
\ \"acc_norm\": 0.23018867924528302,\n \"acc_norm_stderr\": 0.025907897122408173\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.039427724440366234,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.039427724440366234\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\"\
: 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2543352601156069,\n\
\ \"acc_stderr\": 0.0332055644308557,\n \"acc_norm\": 0.2543352601156069,\n\
\ \"acc_norm_stderr\": 0.0332055644308557\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n\
\ \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.26382978723404255,\n \"acc_stderr\": 0.028809989854102973,\n\
\ \"acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.028809989854102973\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n\
\ \"acc_stderr\": 0.04049339297748142,\n \"acc_norm\": 0.24561403508771928,\n\
\ \"acc_norm_stderr\": 0.04049339297748142\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n\
\ \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.24074074074074073,\n \"acc_stderr\": 0.02201908001221789,\n \"\
acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.02201908001221789\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1349206349206349,\n\
\ \"acc_stderr\": 0.030557101589417515,\n \"acc_norm\": 0.1349206349206349,\n\
\ \"acc_norm_stderr\": 0.030557101589417515\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.16,\n \"acc_stderr\": 0.03684529491774708,\n \
\ \"acc_norm\": 0.16,\n \"acc_norm_stderr\": 0.03684529491774708\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.1774193548387097,\n\
\ \"acc_stderr\": 0.02173254068932927,\n \"acc_norm\": 0.1774193548387097,\n\
\ \"acc_norm_stderr\": 0.02173254068932927\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.24630541871921183,\n \"acc_stderr\": 0.030315099285617736,\n\
\ \"acc_norm\": 0.24630541871921183,\n \"acc_norm_stderr\": 0.030315099285617736\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542126,\n \"acc_norm\"\
: 0.28,\n \"acc_norm_stderr\": 0.04512608598542126\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.3282828282828283,\n \"acc_stderr\": 0.03345678422756776,\n \"\
acc_norm\": 0.3282828282828283,\n \"acc_norm_stderr\": 0.03345678422756776\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.37305699481865284,\n \"acc_stderr\": 0.03490205592048573,\n\
\ \"acc_norm\": 0.37305699481865284,\n \"acc_norm_stderr\": 0.03490205592048573\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.02242127361292371,\n\
\ \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.02242127361292371\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.21481481481481482,\n \"acc_stderr\": 0.025040443877000683,\n \
\ \"acc_norm\": 0.21481481481481482,\n \"acc_norm_stderr\": 0.025040443877000683\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.22268907563025211,\n \"acc_stderr\": 0.027025433498882364,\n\
\ \"acc_norm\": 0.22268907563025211,\n \"acc_norm_stderr\": 0.027025433498882364\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.23178807947019867,\n \"acc_stderr\": 0.034454062719870546,\n \"\
acc_norm\": 0.23178807947019867,\n \"acc_norm_stderr\": 0.034454062719870546\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.3302752293577982,\n \"acc_stderr\": 0.02016446633634298,\n \"\
acc_norm\": 0.3302752293577982,\n \"acc_norm_stderr\": 0.02016446633634298\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.19444444444444445,\n \"acc_stderr\": 0.026991454502036733,\n \"\
acc_norm\": 0.19444444444444445,\n \"acc_norm_stderr\": 0.026991454502036733\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.26582278481012656,\n \"acc_stderr\": 0.028756799629658342,\n\
\ \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.028756799629658342\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.17937219730941703,\n\
\ \"acc_stderr\": 0.0257498195691928,\n \"acc_norm\": 0.17937219730941703,\n\
\ \"acc_norm_stderr\": 0.0257498195691928\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2366412213740458,\n \"acc_stderr\": 0.037276735755969174,\n\
\ \"acc_norm\": 0.2366412213740458,\n \"acc_norm_stderr\": 0.037276735755969174\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.35537190082644626,\n \"acc_stderr\": 0.04369236326573981,\n \"\
acc_norm\": 0.35537190082644626,\n \"acc_norm_stderr\": 0.04369236326573981\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n\
\ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n\
\ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.2822085889570552,\n \"acc_stderr\": 0.03536117886664742,\n\
\ \"acc_norm\": 0.2822085889570552,\n \"acc_norm_stderr\": 0.03536117886664742\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n\
\ \"acc_stderr\": 0.04432804055291519,\n \"acc_norm\": 0.32142857142857145,\n\
\ \"acc_norm_stderr\": 0.04432804055291519\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.1941747572815534,\n \"acc_stderr\": 0.03916667762822585,\n\
\ \"acc_norm\": 0.1941747572815534,\n \"acc_norm_stderr\": 0.03916667762822585\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2905982905982906,\n\
\ \"acc_stderr\": 0.02974504857267404,\n \"acc_norm\": 0.2905982905982906,\n\
\ \"acc_norm_stderr\": 0.02974504857267404\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23627075351213284,\n\
\ \"acc_stderr\": 0.015190473717037497,\n \"acc_norm\": 0.23627075351213284,\n\
\ \"acc_norm_stderr\": 0.015190473717037497\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24566473988439305,\n \"acc_stderr\": 0.02317629820399201,\n\
\ \"acc_norm\": 0.24566473988439305,\n \"acc_norm_stderr\": 0.02317629820399201\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24581005586592178,\n\
\ \"acc_stderr\": 0.014400296429225587,\n \"acc_norm\": 0.24581005586592178,\n\
\ \"acc_norm_stderr\": 0.014400296429225587\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.25163398692810457,\n \"acc_stderr\": 0.024848018263875195,\n\
\ \"acc_norm\": 0.25163398692810457,\n \"acc_norm_stderr\": 0.024848018263875195\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.18006430868167203,\n\
\ \"acc_stderr\": 0.021823422857744953,\n \"acc_norm\": 0.18006430868167203,\n\
\ \"acc_norm_stderr\": 0.021823422857744953\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.25617283950617287,\n \"acc_stderr\": 0.024288533637726095,\n\
\ \"acc_norm\": 0.25617283950617287,\n \"acc_norm_stderr\": 0.024288533637726095\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2801418439716312,\n \"acc_stderr\": 0.02678917235114023,\n \
\ \"acc_norm\": 0.2801418439716312,\n \"acc_norm_stderr\": 0.02678917235114023\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24837027379400262,\n\
\ \"acc_stderr\": 0.011035212598034503,\n \"acc_norm\": 0.24837027379400262,\n\
\ \"acc_norm_stderr\": 0.011035212598034503\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.02815637344037142,\n \
\ \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.02815637344037142\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.25,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\"\
: 0.25,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\"\
: {\n \"acc\": 0.18181818181818182,\n \"acc_stderr\": 0.03694284335337801,\n\
\ \"acc_norm\": 0.18181818181818182,\n \"acc_norm_stderr\": 0.03694284335337801\n\
\ },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.31020408163265306,\n\
\ \"acc_stderr\": 0.029613459872484378,\n \"acc_norm\": 0.31020408163265306,\n\
\ \"acc_norm_stderr\": 0.029613459872484378\n },\n \"harness|hendrycksTest-sociology|5\"\
: {\n \"acc\": 0.24875621890547264,\n \"acc_stderr\": 0.030567675938916707,\n\
\ \"acc_norm\": 0.24875621890547264,\n \"acc_norm_stderr\": 0.030567675938916707\n\
\ },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\":\
\ 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-virology|5\"\
: {\n \"acc\": 0.19879518072289157,\n \"acc_stderr\": 0.03106939026078942,\n\
\ \"acc_norm\": 0.19879518072289157,\n \"acc_norm_stderr\": 0.03106939026078942\n\
\ },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.29239766081871343,\n\
\ \"acc_stderr\": 0.034886477134579215,\n \"acc_norm\": 0.29239766081871343,\n\
\ \"acc_norm_stderr\": 0.034886477134579215\n },\n \"harness|truthfulqa:mc|0\"\
: {\n \"mc1\": 0.2521419828641371,\n \"mc1_stderr\": 0.015201522246299965,\n\
\ \"mc2\": 0.41257163824244014,\n \"mc2_stderr\": 0.015127188811834062\n\
\ },\n \"harness|winogrande|5\": {\n \"acc\": 0.4996053670086819,\n\
\ \"acc_stderr\": 0.014052481306049512\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```"
repo_url: https://huggingface.co/Sharathhebbar24/chat_gpt2_dpo
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|arc:challenge|25_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|gsm8k|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hellaswag|10_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-24T23-16-20.806902.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-24T23-16-20.806902.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- '**/details_harness|winogrande|5_2024-01-24T23-16-20.806902.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-24T23-16-20.806902.parquet'
- config_name: results
data_files:
- split: 2024_01_24T23_16_20.806902
path:
- results_2024-01-24T23-16-20.806902.parquet
- split: latest
path:
- results_2024-01-24T23-16-20.806902.parquet
---
# Dataset Card for Evaluation run of Sharathhebbar24/chat_gpt2_dpo
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Sharathhebbar24/chat_gpt2_dpo](https://huggingface.co/Sharathhebbar24/chat_gpt2_dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-24T23:16:20.806902](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo/blob/main/results_2024-01-24T23-16-20.806902.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its "results" and "latest" splits for each eval):
```python
{
"all": {
"acc": 0.24915779048270345,
"acc_stderr": 0.030509906389610868,
"acc_norm": 0.25041231816215265,
"acc_norm_stderr": 0.03132600249114931,
"mc1": 0.2521419828641371,
"mc1_stderr": 0.015201522246299965,
"mc2": 0.41257163824244014,
"mc2_stderr": 0.015127188811834062
},
"harness|arc:challenge|25": {
"acc": 0.18686006825938567,
"acc_stderr": 0.011391015649694391,
"acc_norm": 0.23976109215017063,
"acc_norm_stderr": 0.012476304127453954
},
"harness|hellaswag|10": {
"acc": 0.28978291177056364,
"acc_stderr": 0.004527343651130803,
"acc_norm": 0.3121888070105557,
"acc_norm_stderr": 0.0046243936909668975
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.039725528847851375,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.039725528847851375
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.23018867924528302,
"acc_stderr": 0.025907897122408173,
"acc_norm": 0.23018867924528302,
"acc_norm_stderr": 0.025907897122408173
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.19,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.19,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2543352601156069,
"acc_stderr": 0.0332055644308557,
"acc_norm": 0.2543352601156069,
"acc_norm_stderr": 0.0332055644308557
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.26382978723404255,
"acc_stderr": 0.028809989854102973,
"acc_norm": 0.26382978723404255,
"acc_norm_stderr": 0.028809989854102973
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.04049339297748142,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.04049339297748142
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.02201908001221789,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.02201908001221789
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.1349206349206349,
"acc_stderr": 0.030557101589417515,
"acc_norm": 0.1349206349206349,
"acc_norm_stderr": 0.030557101589417515
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.16,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.16,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.1774193548387097,
"acc_stderr": 0.02173254068932927,
"acc_norm": 0.1774193548387097,
"acc_norm_stderr": 0.02173254068932927
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.24630541871921183,
"acc_stderr": 0.030315099285617736,
"acc_norm": 0.24630541871921183,
"acc_norm_stderr": 0.030315099285617736
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542126,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542126
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.3282828282828283,
"acc_stderr": 0.03345678422756776,
"acc_norm": 0.3282828282828283,
"acc_norm_stderr": 0.03345678422756776
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.37305699481865284,
"acc_stderr": 0.03490205592048573,
"acc_norm": 0.37305699481865284,
"acc_norm_stderr": 0.03490205592048573
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.26666666666666666,
"acc_stderr": 0.02242127361292371,
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.02242127361292371
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.21481481481481482,
"acc_stderr": 0.025040443877000683,
"acc_norm": 0.21481481481481482,
"acc_norm_stderr": 0.025040443877000683
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.22268907563025211,
"acc_stderr": 0.027025433498882364,
"acc_norm": 0.22268907563025211,
"acc_norm_stderr": 0.027025433498882364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.23178807947019867,
"acc_stderr": 0.034454062719870546,
"acc_norm": 0.23178807947019867,
"acc_norm_stderr": 0.034454062719870546
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.3302752293577982,
"acc_stderr": 0.02016446633634298,
"acc_norm": 0.3302752293577982,
"acc_norm_stderr": 0.02016446633634298
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.19444444444444445,
"acc_stderr": 0.026991454502036733,
"acc_norm": 0.19444444444444445,
"acc_norm_stderr": 0.026991454502036733
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.26582278481012656,
"acc_stderr": 0.028756799629658342,
"acc_norm": 0.26582278481012656,
"acc_norm_stderr": 0.028756799629658342
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.17937219730941703,
"acc_stderr": 0.0257498195691928,
"acc_norm": 0.17937219730941703,
"acc_norm_stderr": 0.0257498195691928
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2366412213740458,
"acc_stderr": 0.037276735755969174,
"acc_norm": 0.2366412213740458,
"acc_norm_stderr": 0.037276735755969174
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.35537190082644626,
"acc_stderr": 0.04369236326573981,
"acc_norm": 0.35537190082644626,
"acc_norm_stderr": 0.04369236326573981
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2822085889570552,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.2822085889570552,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.32142857142857145,
"acc_stderr": 0.04432804055291519,
"acc_norm": 0.32142857142857145,
"acc_norm_stderr": 0.04432804055291519
},
"harness|hendrycksTest-management|5": {
"acc": 0.1941747572815534,
"acc_stderr": 0.03916667762822585,
"acc_norm": 0.1941747572815534,
"acc_norm_stderr": 0.03916667762822585
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2905982905982906,
"acc_stderr": 0.02974504857267404,
"acc_norm": 0.2905982905982906,
"acc_norm_stderr": 0.02974504857267404
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23627075351213284,
"acc_stderr": 0.015190473717037497,
"acc_norm": 0.23627075351213284,
"acc_norm_stderr": 0.015190473717037497
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24566473988439305,
"acc_stderr": 0.02317629820399201,
"acc_norm": 0.24566473988439305,
"acc_norm_stderr": 0.02317629820399201
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24581005586592178,
"acc_stderr": 0.014400296429225587,
"acc_norm": 0.24581005586592178,
"acc_norm_stderr": 0.014400296429225587
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.25163398692810457,
"acc_stderr": 0.024848018263875195,
"acc_norm": 0.25163398692810457,
"acc_norm_stderr": 0.024848018263875195
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.18006430868167203,
"acc_stderr": 0.021823422857744953,
"acc_norm": 0.18006430868167203,
"acc_norm_stderr": 0.021823422857744953
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.25617283950617287,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.25617283950617287,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2801418439716312,
"acc_stderr": 0.02678917235114023,
"acc_norm": 0.2801418439716312,
"acc_norm_stderr": 0.02678917235114023
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24837027379400262,
"acc_stderr": 0.011035212598034503,
"acc_norm": 0.24837027379400262,
"acc_norm_stderr": 0.011035212598034503
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.3125,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.25,
"acc_stderr": 0.01751781884501444,
"acc_norm": 0.25,
"acc_norm_stderr": 0.01751781884501444
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.18181818181818182,
"acc_stderr": 0.03694284335337801,
"acc_norm": 0.18181818181818182,
"acc_norm_stderr": 0.03694284335337801
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.31020408163265306,
"acc_stderr": 0.029613459872484378,
"acc_norm": 0.31020408163265306,
"acc_norm_stderr": 0.029613459872484378
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24875621890547264,
"acc_stderr": 0.030567675938916707,
"acc_norm": 0.24875621890547264,
"acc_norm_stderr": 0.030567675938916707
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.19879518072289157,
"acc_stderr": 0.03106939026078942,
"acc_norm": 0.19879518072289157,
"acc_norm_stderr": 0.03106939026078942
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.29239766081871343,
"acc_stderr": 0.034886477134579215,
"acc_norm": 0.29239766081871343,
"acc_norm_stderr": 0.034886477134579215
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2521419828641371,
"mc1_stderr": 0.015201522246299965,
"mc2": 0.41257163824244014,
"mc2_stderr": 0.015127188811834062
},
"harness|winogrande|5": {
"acc": 0.4996053670086819,
"acc_stderr": 0.014052481306049512
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
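The aggregated results above are a plain nested dict keyed by task name, so individual metrics can be pulled out with ordinary dict operations. A minimal sketch, using a hand-copied fragment of the JSON shown above rather than a download (the full file has the same structure):

```python
# Fragment of the aggregated results dict shown above, reproduced
# by hand for illustration; the real file contains all 63 tasks.
results = {
    "all": {
        "acc": 0.24915779048270345,
        "acc_norm": 0.25041231816215265,
        "mc2": 0.41257163824244014,
    },
    "harness|winogrande|5": {"acc": 0.4996053670086819},
    "harness|gsm8k|5": {"acc": 0.0},
}

# Collect per-task accuracy, skipping the "all" aggregate and any
# task that does not report a plain "acc" metric (e.g. truthfulqa:mc).
per_task_acc = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task != "all" and "acc" in metrics
}
print(per_task_acc)
# → {'harness|winogrande|5': 0.4996053670086819, 'harness|gsm8k|5': 0.0}
```

The same pattern applies after downloading `results_2024-01-24T23-16-20.806902.json` from the repo and parsing it with `json.load`.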
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ML4SE2023-G1-WizardCoder/ML4SE23_G1_HumanEval-SCoT | ---
task_categories:
- text-generation
pretty_name: HumanEval dataset enhanced with Structured-Chain-of-Thought
size_categories:
- n<1K
language:
- en
---
HumanEval dataset enhanced with Structured-Chain-of-Thought |
sethapun/imdb_misspelled_10 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 33634327
num_examples: 25000
- name: validation
num_bytes: 32850518
num_examples: 25000
download_size: 52525337
dataset_size: 66484845
---
# Dataset Card for "imdb_misspelled_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_wnli_relativizer_where | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 124
num_examples: 1
- name: test
num_bytes: 1162
num_examples: 4
- name: train
num_bytes: 6970
num_examples: 26
download_size: 12284
dataset_size: 8256
---
# Dataset Card for "MULTI_VALUE_wnli_relativizer_where"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
P1ayer-1/crash_course_subs | ---
dataset_info:
features:
- name: channel_id
dtype: string
- name: channel_url
dtype: string
- name: video_name
dtype: string
- name: video_id
dtype: string
- name: duration
dtype: int64
- name: chapters
list:
- name: start_time
dtype: float64
- name: title
dtype: string
- name: end_time
dtype: float64
- name: subtitles
list:
- name: text
dtype: string
- name: timestamp
sequence: float64
- name: timed_subtitles
sequence:
sequence: string
splits:
- name: train
num_bytes: 59167266.5573988
num_examples: 1473
download_size: 25720094
dataset_size: 59167266.5573988
---
# Dataset Card for "crash_course_subs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
swaroop-nath/prompt-opin-summ | ---
license: mit
task_categories:
- summarization
language:
- en
tags:
- e-commerce
pretty_name: prompt-opin-summ
size_categories:
- 10K<n<100K
---
This repository contains a dataset for training opinion summarization models.
The dataset was generated using Mistral-7B ([mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)).
The dataset includes at most 9 opinion summaries per product, for 20763 products in the train set and 5000 products in the validation set.
The dataset is formatted as a jsonl file ([jsonlines-guide](https://jsonlines.org/)). Each line can be loaded as a json object, and has the following format:
{\
'`unique-id`': *a unique id*,\
'`reviews`': *list of reviews*,\
'`summaries`': [\
'`summary_text`': *the opinion summary generated by Mistral-7B*,\
'`score`': {\
'`aspect-coverage`': *score (`0` to `5`; higher is better) assigned for aspect coverage*,\
'`opinion-faithfulness`': *score (`0` to `5`; higher is better) assigned for opinion faithfulness*,\
'`opinion-coverage`': *score (`0` to `5`; higher is better) assigned for opinion coverage*,\
'`relevance`': *score (`0` to `5`; higher is better) assigned for relevance*,\
'`conciseness`': *score (`0` to `5`; higher is better) assigned for conciseness*,\
'`hallucination`': *score (`0` to `5`; higher is better) assigned for hallucination*,\
'`language-correctness`': *score (`0` to `5`; higher is better) assigned for language correctness*,\
}\
'`is-good`': *bool value specifying if the summary is generated to be a good one*,\
'`is-sbad`': *bool value specifying if the summary is generated to be a slightly bad one*,\
'`is-vbad`': *bool value specifying if the summary is generated to be a very bad one*\
]\
}
Starter code (in Python) to read the dataset:
`import json`\
`from typing import Dict, List`\
`def read_dataset(data_path: str) -> List[Dict]:`\
`    with open(data_path, 'r') as file:`\
`        lines = file.readlines()`\
`    dataset = [json.loads(line) for line in lines]`\
`    return dataset`
Basic statistics on the dataset:
| **Split** | **Characteristic** | \\(\mu\\) | \\(\sigma\\) |
|----------------------------|-------------------------|---------|----------|
| train | # reviews | \\(13.24\\) | \\(10.07\\) |
| | # summaries | \\(8.90\\) | \\(0.34\\) |
| | # words in review | \\(49.0\\) | \\(10.78\\) |
| | # words in summary | \\(78.28\\) | \\(34.45\\) |
| valid | # reviews | \\(10.53\\) | \\(6.80\\) |
| | # summaries | \\(8.98\\) | \\(0.16\\) |
| | # words in review | \\(48.65\\) | \\(10.63\\) |
| | # words in summary | \\(74.26\\) | \\(34.27\\) |
|
linda996/98972 | ---
license: openrail
---
|
open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots | ---
pretty_name: Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [openaccess-ai-collective/openhermes-2_5-dpo-no-robots](https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T15:08:25.673533](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots/blob/main/results_2023-12-02T15-08-25.673533.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5526914329037149,\n\
\ \"acc_stderr\": 0.013695795709089898\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.5526914329037149,\n \"acc_stderr\": 0.013695795709089898\n\
\ }\n}\n```"
repo_url: https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_02T15_08_25.673533
path:
- '**/details_harness|gsm8k|5_2023-12-02T15-08-25.673533.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T15-08-25.673533.parquet'
- config_name: results
data_files:
- split: 2023_12_02T15_08_25.673533
path:
- results_2023-12-02T15-08-25.673533.parquet
- split: latest
path:
- results_2023-12-02T15-08-25.673533.parquet
---
# Dataset Card for Evaluation run of openaccess-ai-collective/openhermes-2_5-dpo-no-robots
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [openaccess-ai-collective/openhermes-2_5-dpo-no-robots](https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T15:08:25.673533](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__openhermes-2_5-dpo-no-robots/blob/main/results_2023-12-02T15-08-25.673533.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5526914329037149,
"acc_stderr": 0.013695795709089898
},
"harness|gsm8k|5": {
"acc": 0.5526914329037149,
"acc_stderr": 0.013695795709089898
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
irds/lotte_lifestyle_dev_search | ---
pretty_name: '`lotte/lifestyle/dev/search`'
viewer: false
source_datasets: ['irds/lotte_lifestyle_dev']
task_categories:
- text-retrieval
---
# Dataset Card for `lotte/lifestyle/dev/search`
The `lotte/lifestyle/dev/search` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/lifestyle/dev/search).
# Data
This dataset provides:
- `queries` (i.e., topics); count=417
- `qrels`: (relevance assessments); count=1,376
- For `docs`, use [`irds/lotte_lifestyle_dev`](https://huggingface.co/datasets/irds/lotte_lifestyle_dev)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/lotte_lifestyle_dev_search', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/lotte_lifestyle_dev_search', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Santhanam2021ColBERTv2,
title = "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction",
author = "Keshav Santhanam and Omar Khattab and Jon Saad-Falcon and Christopher Potts and Matei Zaharia",
journal= "arXiv preprint arXiv:2112.01488",
year = "2021",
url = "https://arxiv.org/abs/2112.01488"
}
```
|
michaelmallari/airbnb-usa-or-portland | ---
license: mit
---
|
jjinni2/new2maywell | ---
license: unknown
---
|
unigram/fol-05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: string
- name: proof
dtype: string
- name: premise_tptp
dtype: string
- name: hypothesis_tptp
dtype: string
- name: deberta_pred
dtype: string
- name: deberta_pred_r1_label
dtype: string
- name: deberta_pred_r2_label
dtype: string
- name: deberta_pred_r3_label
dtype: string
- name: deberta_pred_r4_label
dtype: string
- name: deberta_pred_r5_label
dtype: string
splits:
- name: train
num_bytes: 2386784
num_examples: 255
- name: validation
num_bytes: 1345424
num_examples: 153
- name: test
num_bytes: 1268544
num_examples: 141
download_size: 912336
dataset_size: 5000752
---
# Dataset Card for "fol-05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hendrycks/competition_math | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Mathematics Aptitude Test of Heuristics (MATH)
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
tags:
- explanation-generation
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5984788
num_examples: 7500
- name: test
num_bytes: 3732575
num_examples: 5000
download_size: 20327424
dataset_size: 9717363
---
# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/hendrycks/math
- **Repository:** https://github.com/hendrycks/math
- **Paper:** https://arxiv.org/pdf/2103.03874.pdf
- **Leaderboard:** N/A
- **Point of Contact:** Dan Hendrycks
### Dataset Summary
The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems
from mathematics competitions, including the AMC 10, AMC 12, AIME, and more.
Each problem in MATH has a full step-by-step solution, which can be used to teach
models to generate answer derivations and explanations.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's `\boxed` tag.
An example from the dataset is:
```
{'problem': 'A board game spinner is divided into three parts labeled $A$, $B$ and $C$. The probability of the spinner landing on $A$ is $\\frac{1}{3}$ and the probability of the spinner landing on $B$ is $\\frac{5}{12}$. What is the probability of the spinner landing on $C$? Express your answer as a common fraction.',
'level': 'Level 1',
'type': 'Counting & Probability',
'solution': 'The spinner is guaranteed to land on exactly one of the three regions, so we know that the sum of the probabilities of it landing in each region will be 1. If we let the probability of it landing in region $C$ be $x$, we then have the equation $1 = \\frac{5}{12}+\\frac{1}{3}+x$, from which we have $x=\\boxed{\\frac{1}{4}}$.'}
```
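Because the `\boxed` tag often wraps nested braces (e.g. `\boxed{\frac{1}{4}}`), a plain regex is fragile. A small brace-matching helper (a sketch, not an official utility of this dataset) can recover the final answer from a solution string:

```python
from typing import Optional

def extract_boxed_answer(solution: str) -> Optional[str]:
    """Return the contents of the last \\boxed{...} span in a solution string.

    Braces are matched manually because answers such as \\frac{1}{4}
    nest further braces inside the \\boxed tag.
    """
    marker = "\\boxed{"
    start = solution.rfind(marker)
    if start == -1:
        return None
    depth = 1
    chars = []
    for c in solution[start + len(marker):]:
        if c == "{":
            depth += 1
        elif c == "}":
            depth -= 1
            if depth == 0:  # closing brace of \boxed itself
                break
        chars.append(c)
    return "".join(chars)

solution = "from which we have $x=\\boxed{\\frac{1}{4}}$."
print(extract_boxed_answer(solution))  # \frac{1}{4}
```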
### Data Fields
* `problem`: The competition math problem.
* `solution`: The step-by-step solution.
* `level`: The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'.
* `type`: The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus.
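For instance, examples can be bucketed by `level` and `type` with plain counters. The records below are toy stand-ins shaped like MATH rows, not real dataset entries; iterating a split loaded with 🤗 `datasets` works the same way:

```python
from collections import Counter

# Two toy records shaped like MATH examples (not real dataset rows).
records = [
    {"problem": "...", "level": "Level 1", "type": "Counting & Probability", "solution": "..."},
    {"problem": "...", "level": "Level 5", "type": "Algebra", "solution": "..."},
]

by_level = Counter(r["level"] for r in records)  # counts per difficulty level
by_type = Counter(r["type"] for r in records)    # counts per subject
print(by_level)
print(by_type)
```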
### Data Splits
* train: 7,500 examples
* test: 5,000 examples
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
https://github.com/hendrycks/math/blob/main/LICENSE
### Citation Information
```bibtex
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks
and Collin Burns
and Saurav Kadavath
and Akul Arora
and Steven Basart
and Eric Tang
and Dawn Song
and Jacob Steinhardt},
journal={arXiv preprint arXiv:2103.03874},
year={2021}
}
```
### Contributions
Thanks to [@hacobe](https://github.com/hacobe) for adding this dataset. |
tollefj/sts16-sts-NOB | ---
license: cc-by-4.0
---
# Translated STS dataset to Norwegian Bokmål
Machine translated using the *No language left behind* model series, specifically the 1.3B variant: https://huggingface.co/facebook/nllb-200-distilled-1.3B |
danbider/chat-formatted-metamath | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 294232804
num_examples: 395000
download_size: 129407314
dataset_size: 294232804
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
allenai/ropes | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: ropes
pretty_name: ROPES
dataset_info:
config_name: plain_text
features:
- name: id
dtype: string
- name: background
dtype: string
- name: situation
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
splits:
- name: train
num_bytes: 12231892
num_examples: 10924
- name: test
num_bytes: 1928508
num_examples: 1710
- name: validation
num_bytes: 1643474
num_examples: 1688
download_size: 1372548
dataset_size: 15803874
configs:
- config_name: plain_text
data_files:
- split: train
path: plain_text/train-*
- split: test
path: plain_text/test-*
- split: validation
path: plain_text/validation-*
default: true
---
# Dataset Card for ROPES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ROPES dataset](https://allenai.org/data/ropes)
- **Paper:** [Reasoning Over Paragraph Effects in Situations](https://arxiv.org/abs/1908.05852)
- **Leaderboard:** [ROPES leaderboard](https://leaderboard.allenai.org/ropes)
### Dataset Summary
ROPES (Reasoning Over Paragraph Effects in Situations) is a QA dataset which tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented a background passage containing a causal or qualitative relation(s) (e.g., "animal pollinators increase efficiency of fertilization in flowers"), a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation.
### Supported Tasks and Leaderboards
The reading comprehension task is framed as an extractive question answering problem.
Models are evaluated by computing word-level F1 and exact match (EM) metrics, following common practice for recent reading comprehension datasets (e.g., SQuAD).
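As a rough illustration of these metrics (official SQuAD-style scorers additionally strip punctuation and articles before comparing, which this sketch omits), word-level F1 and exact match can be computed as follows:

```python
from collections import Counter

def exact_match(prediction: str, gold: str) -> bool:
    # Case-insensitive exact string match after whitespace normalization.
    return " ".join(prediction.lower().split()) == " ".join(gold.lower().split())

def word_f1(prediction: str, gold: str) -> float:
    # Word-level F1: harmonic mean of precision and recall over token overlap.
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Jason", "jason"))      # True
print(round(word_f1("the cat", "cat"), 4))
```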
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data closely follow the SQuAD v1.1 format. An example looks like this:
```
{
"id": "2058517998",
"background": "Cancer is a disease that causes cells to divide out of control. Normally, the body has systems that prevent cells from dividing out of control. But in the case of cancer, these systems fail. Cancer is usually caused by mutations. Mutations are random errors in genes. Mutations that lead to cancer usually happen to genes that control the cell cycle. Because of the mutations, abnormal cells divide uncontrollably. This often leads to the development of a tumor. A tumor is a mass of abnormal tissue. As a tumor grows, it may harm normal tissues around it. Anything that can cause cancer is called a carcinogen . Carcinogens may be pathogens, chemicals, or radiation.",
"situation": "Jason recently learned that he has cancer. After hearing this news, he convinced his wife, Charlotte, to get checked out. After running several tests, the doctors determined Charlotte has no cancer, but she does have high blood pressure. Relieved at this news, Jason was now focused on battling his cancer and fighting as hard as he could to survive.",
"question": "Whose cells are dividing more rapidly?",
"answers": {
"text": ["Jason"]
},
}
```
### Data Fields
- `id`: identification
- `background`: background passage
- `situation`: the grounding situation
- `question`: the question to answer
- `answers`: the answer text, which is a span from either the situation or the question. The text list always contains a single element.
Note that the answers for the test set are hidden (and thus represented as an empty list). Predictions for the test set should be submitted to the leaderboard.
### Data Splits
The dataset contains 14k QA pairs over 1.7K paragraphs, split between train (10k QAs), development (1.6k QAs) and a hidden test partition (1.7k QAs).
## Dataset Creation
### Curation Rationale
From the original paper:
*ROPES challenges reading comprehension models to handle more difficult phenomena: understanding the implications of a passage of text. ROPES is also particularly related to datasets focusing on "multi-hop reasoning", as by construction answering questions in ROPES requires connecting information from multiple parts of a given passage.*
*We constructed ROPES by first collecting background passages from science textbooks and Wikipedia articles that describe causal relationships. We showed the collected paragraphs to crowd workers and asked them to write situations that involve the relationships found in the background passage, and questions that connect the situation and the background using the causal relationships. The answers are spans from either the situation or the question. The dataset consists of 14,322 questions from various domains, mostly in science and economics.*
### Source Data
From the original paper:
*We automatically scraped passages from science textbooks and Wikipedia that contained causal connectives, e.g., "causes" and "leads to," and keywords that signal qualitative relations, e.g., "increases" and "decreases." We then manually filtered out the passages that do not have at least one relation. The passages can be categorized into physical science (49%), life science (45%), economics (5%) and other (1%). In total, we collected over 1,000 background passages.*
#### Initial Data Collection and Normalization
From the original paper:
*We used Amazon Mechanical Turk (AMT) to generate the situations, questions, and answers. The AMT workers were given background passages and asked to write situations that involved the relation(s) in the background passage. The AMT workers then authored questions about the situation that required both the background and the situation to answer. In each human intelligence task (HIT), AMT workers are given 5 background passages to select from and are asked to create a total of 10 questions. To mitigate the potential for easy lexical shortcuts in the dataset, the workers were encouraged via instructions to write questions in minimal pairs, where a very small change in the question results in a different answer.*
*Most questions are designed to have two sensible answer choices (e.g., "more" vs. "less").*
To reduce annotator bias, training and evaluation sets are written by different annotators.
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The data is distributed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@inproceedings{Lin2019ReasoningOP,
title={Reasoning Over Paragraph Effects in Situations},
author={Kevin Lin and Oyvind Tafjord and Peter Clark and Matt Gardner},
booktitle={MRQA@EMNLP},
year={2019}
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
open-llm-leaderboard/details_Gille__StrangeMerges_28-7B-dare_ties | ---
pretty_name: Evaluation run of Gille/StrangeMerges_28-7B-dare_ties
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Gille/StrangeMerges_28-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_28-7B-dare_ties)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Gille__StrangeMerges_28-7B-dare_ties\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-02T23:20:41.709141](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_28-7B-dare_ties/blob/main/results_2024-04-02T23-20-41.709141.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.652549933565694,\n\
\ \"acc_stderr\": 0.03216212761875025,\n \"acc_norm\": 0.6521949973827587,\n\
\ \"acc_norm_stderr\": 0.032830802430059286,\n \"mc1\": 0.627906976744186,\n\
\ \"mc1_stderr\": 0.01692109011881403,\n \"mc2\": 0.7754925522086183,\n\
\ \"mc2_stderr\": 0.013783768613942371\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7056313993174061,\n \"acc_stderr\": 0.013318528460539422,\n\
\ \"acc_norm\": 0.7218430034129693,\n \"acc_norm_stderr\": 0.013094469919538805\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.716391157140012,\n\
\ \"acc_stderr\": 0.004498280244494493,\n \"acc_norm\": 0.8907588129854611,\n\
\ \"acc_norm_stderr\": 0.0031130406065401238\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249386,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249386\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n\
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.04784060704105654,\n\
\ \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.04784060704105654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.548936170212766,\n \"acc_stderr\": 0.032529096196131965,\n\
\ \"acc_norm\": 0.548936170212766,\n \"acc_norm_stderr\": 0.032529096196131965\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n\
\ \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.43386243386243384,\n \"acc_stderr\": 0.02552503438247489,\n \"\
acc_norm\": 0.43386243386243384,\n \"acc_norm_stderr\": 0.02552503438247489\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n\
\ \"acc_stderr\": 0.023904914311782655,\n \"acc_norm\": 0.7709677419354839,\n\
\ \"acc_norm_stderr\": 0.023904914311782655\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5369458128078818,\n \"acc_stderr\": 0.035083705204426656,\n\
\ \"acc_norm\": 0.5369458128078818,\n \"acc_norm_stderr\": 0.035083705204426656\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919436,\n\
\ \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919436\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n\
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.02882088466625326,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.02882088466625326\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135367,\n\
\ \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135367\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.41721854304635764,\n \"acc_stderr\": 0.04026141497634611,\n \"\
acc_norm\": 0.41721854304635764,\n \"acc_norm_stderr\": 0.04026141497634611\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"\
acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8431372549019608,\n\
\ \"acc_stderr\": 0.025524722324553353,\n \"acc_norm\": 0.8431372549019608,\n\
\ \"acc_norm_stderr\": 0.025524722324553353\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n\
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516302,\n \"\
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516302\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\
\ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092368,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092368\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8237547892720306,\n\
\ \"acc_stderr\": 0.013625556907993462,\n \"acc_norm\": 0.8237547892720306,\n\
\ \"acc_norm_stderr\": 0.013625556907993462\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.02402774515526502,\n\
\ \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.02402774515526502\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.43575418994413406,\n\
\ \"acc_stderr\": 0.016583881958602394,\n \"acc_norm\": 0.43575418994413406,\n\
\ \"acc_norm_stderr\": 0.016583881958602394\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.025829163272757482,\n\
\ \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.025829163272757482\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\
\ \"acc_stderr\": 0.02575586592263295,\n \"acc_norm\": 0.7106109324758842,\n\
\ \"acc_norm_stderr\": 0.02575586592263295\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460842,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460842\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5035460992907801,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.5035460992907801,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46936114732724904,\n\
\ \"acc_stderr\": 0.012746237711716634,\n \"acc_norm\": 0.46936114732724904,\n\
\ \"acc_norm_stderr\": 0.012746237711716634\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6985294117647058,\n \"acc_stderr\": 0.027875982114273168,\n\
\ \"acc_norm\": 0.6985294117647058,\n \"acc_norm_stderr\": 0.027875982114273168\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162673,\n \
\ \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162673\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\
\ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5783132530120482,\n\
\ \"acc_stderr\": 0.03844453181770917,\n \"acc_norm\": 0.5783132530120482,\n\
\ \"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.627906976744186,\n\
\ \"mc1_stderr\": 0.01692109011881403,\n \"mc2\": 0.7754925522086183,\n\
\ \"mc2_stderr\": 0.013783768613942371\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.835043409629045,\n \"acc_stderr\": 0.010430917468237431\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6815769522365428,\n \
\ \"acc_stderr\": 0.012832225723075408\n }\n}\n```"
repo_url: https://huggingface.co/Gille/StrangeMerges_28-7B-dare_ties
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|arc:challenge|25_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|gsm8k|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hellaswag|10_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-02T23-20-41.709141.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-02T23-20-41.709141.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- '**/details_harness|winogrande|5_2024-04-02T23-20-41.709141.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-02T23-20-41.709141.parquet'
- config_name: results
data_files:
- split: 2024_04_02T23_20_41.709141
path:
- results_2024-04-02T23-20-41.709141.parquet
- split: latest
path:
- results_2024-04-02T23-20-41.709141.parquet
---
# Dataset Card for Evaluation run of Gille/StrangeMerges_28-7B-dare_ties
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Gille/StrangeMerges_28-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_28-7B-dare_ties) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
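Because the timestamped split names are zero-padded, lexicographic order matches chronological order, so the most recent run can be recovered from the split names alone. A minimal sketch (the split names below are illustrative, not read from a real repo):

```python
def latest_split(split_names):
    """Return the most recent timestamped split name.

    Split names like '2024_04_02T23_20_41.709141' are zero-padded,
    so the lexicographic maximum is also the chronological maximum.
    """
    timestamped = [s for s in split_names if s != "latest"]
    return max(timestamped)


splits = [
    "2024_03_15T10_02_11.000000",
    "2024_04_02T23_20_41.709141",
    "latest",
]
print(latest_split(splits))  # 2024_04_02T23_20_41.709141
```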
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Gille__StrangeMerges_28-7B-dare_ties",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-04-02T23:20:41.709141](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_28-7B-dare_ties/blob/main/results_2024-04-02T23-20-41.709141.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results file and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.652549933565694,
"acc_stderr": 0.03216212761875025,
"acc_norm": 0.6521949973827587,
"acc_norm_stderr": 0.032830802430059286,
"mc1": 0.627906976744186,
"mc1_stderr": 0.01692109011881403,
"mc2": 0.7754925522086183,
"mc2_stderr": 0.013783768613942371
},
"harness|arc:challenge|25": {
"acc": 0.7056313993174061,
"acc_stderr": 0.013318528460539422,
"acc_norm": 0.7218430034129693,
"acc_norm_stderr": 0.013094469919538805
},
"harness|hellaswag|10": {
"acc": 0.716391157140012,
"acc_stderr": 0.004498280244494493,
"acc_norm": 0.8907588129854611,
"acc_norm_stderr": 0.0031130406065401238
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249386,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249386
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.04784060704105654,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.04784060704105654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.548936170212766,
"acc_stderr": 0.032529096196131965,
"acc_norm": 0.548936170212766,
"acc_norm_stderr": 0.032529096196131965
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43386243386243384,
"acc_stderr": 0.02552503438247489,
"acc_norm": 0.43386243386243384,
"acc_norm_stderr": 0.02552503438247489
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782655,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782655
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919436,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919436
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402534,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402534
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.02882088466625326,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.02882088466625326
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135367,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135367
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.41721854304635764,
"acc_stderr": 0.04026141497634611,
"acc_norm": 0.41721854304635764,
"acc_norm_stderr": 0.04026141497634611
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.025524722324553353,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.025524722324553353
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.03941897526516302,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.03941897526516302
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092368,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092368
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993462,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993462
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.02402774515526502,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.02402774515526502
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.43575418994413406,
"acc_stderr": 0.016583881958602394,
"acc_norm": 0.43575418994413406,
"acc_norm_stderr": 0.016583881958602394
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.025829163272757482,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.025829163272757482
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.02575586592263295,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.02575586592263295
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460842,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460842
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5035460992907801,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.5035460992907801,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46936114732724904,
"acc_stderr": 0.012746237711716634,
"acc_norm": 0.46936114732724904,
"acc_norm_stderr": 0.012746237711716634
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6985294117647058,
"acc_stderr": 0.027875982114273168,
"acc_norm": 0.6985294117647058,
"acc_norm_stderr": 0.027875982114273168
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.018999707383162673,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.018999707383162673
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5783132530120482,
"acc_stderr": 0.03844453181770917,
"acc_norm": 0.5783132530120482,
"acc_norm_stderr": 0.03844453181770917
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.627906976744186,
"mc1_stderr": 0.01692109011881403,
"mc2": 0.7754925522086183,
"mc2_stderr": 0.013783768613942371
},
"harness|winogrande|5": {
"acc": 0.835043409629045,
"acc_stderr": 0.010430917468237431
},
"harness|gsm8k|5": {
"acc": 0.6815769522365428,
"acc_stderr": 0.012832225723075408
}
}
```
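The aggregated numbers in the `all` entry can be recomputed from the per-task entries, e.g. by averaging `acc` over the MMLU (`hendrycksTest`) keys. A minimal sketch using a toy dict that mirrors the structure above (the values here are illustrative, not a full recomputation):

```python
def mean_mmlu_acc(results):
    """Average 'acc' over the MMLU (hendrycksTest) tasks in a results dict."""
    accs = [
        v["acc"]
        for k, v in results.items()
        if k.startswith("harness|hendrycksTest")
    ]
    return sum(accs) / len(accs)


toy = {
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6},
    "harness|hendrycksTest-virology|5": {"acc": 0.5},
    "harness|winogrande|5": {"acc": 0.8},  # not an MMLU task; excluded
}
print(mean_mmlu_acc(toy))
```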
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
npvinHnivqn/VietEngDictionary | ---
license: afl-3.0
task_categories:
- translation
language:
- vi
- en
size_categories:
- 10K<n<100K
--- |
Wiritpol/mC4-th-clean | ---
license: apache-2.0
language:
- th
pretty_name: mC4-th-size(ss)
size_categories:
- 100K<n<1M
---
mC4 - Thai (Clean) - Size(ss) 187M Tokens (6.67% of mC4-th-clean, ~2.8B Tokens)
---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 1881070925
num_examples: 494539
download_size: 739477768
dataset_size: 1881070925
--- |
uproai/endex-700k-ns | ---
dataset_info:
features:
- name: text_a
dtype: string
- name: text_b
dtype: string
- name: RE
dtype: float64
- name: AR
dtype: float64
- name: BE
dtype: float64
- name: EE
dtype: float64
- name: num_score
dtype: float64
- name: labels
dtype: int64
- name: 'Unnamed: 0'
dtype: float64
- name: toxicity
dtype: float64
- name: sentiment
dtype: float64
- name: engagingness_score
dtype: float64
- name: method
dtype: string
splits:
- name: train
num_bytes: 403461950
num_examples: 686046
download_size: 251097681
dataset_size: 403461950
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- text-classification
language:
- en
tags:
- endex
- roleplay
---
Train data of https://github.com/gxxu-ml/endex |
bigscience-data/roots_id_indonesian_frog_storytelling_corpus | ---
language: id
license: cc-by-sa-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_indonesian_frog_storytelling_corpus
# Indonesian Frog Storytelling Corpus
- Dataset uid: `indonesian_frog_storytelling_corpus`
### Description
Indonesian written and spoken corpus, based on the [twenty-eight pictures](http://compling.hss.ntu.edu.sg/who/david/corpus/pictures.pdf)
The storytelling survey was conducted in Malang in August 2011 and in Tokyo in September
and October 2011. The consultants were from different backgrounds (in terms of age, place of birth, place
where they grew up, and mother tongue). All had received a high school-level or university-level education.
All could be classified as belonging to the middle class socially.
http://www.tufs.ac.jp/st/personal/13/david/papers/APLL5_davidmoeljadi.pdf
### Homepage
https://github.com/davidmoeljadi/corpus-frog-storytelling
### Licensing
- open license
- cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
### Speaker Locations
- Indonesia
- Japan
### Sizes
- 0.0000 % of total
- 0.0047 % of id
### BigScience processing steps
#### Filters applied to: id
- no
- processing
- at
- all
- as
- unnecessary
- dedup
- was
- caught
- late
|
projecte-aina/wnli-ca | ---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: wnli-ca
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# WNLI-ca
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
- **Point of Contact:** [Language Technologies Unit](langtech@bsc.es)
### Dataset Summary
"A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
This dataset is a professional translation into Catalan of [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in [GLUE Benchmark](https://gluebenchmark.com/tasks).
Both the original dataset and this translation are licenced under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model.
### Languages
The dataset is in Catalan (`ca-ES`)
## Dataset Structure
### Data Instances
Three tsv files.
### Data Fields
- index
- sentence 1: first sentence of the pair
- sentence 2: second sentence of the pair
- label: relation between the two sentences:
* 0: the second sentence does not entail a correct interpretation of the first one (neutral)
* 1: the second sentence entails a correct interpretation of the first one (entailment)
### Example
| index | sentence 1 | sentence 2 | label |
| ------- |----------- | --------- | ----- |
| 0 | Vaig clavar una agulla en una pastanaga. Quan la vaig treure, tenia un forat. | La pastanaga tenia un forat. | 1 |
| 1 | En Joan no podia veure l’escenari amb en Guillem davant seu perquè és molt baix. | En Joan és molt baix. | 1 |
| 2 | Els policies van arrestar tots els membres de la banda. Volien aturar el tràfic de drogues del barri. | Els policies volien aturar el tràfic de drogues del barri. | 1 |
| 3 | L’Esteve segueix els passos d’en Frederic en tot. L’influencia moltíssim. | L’Esteve l’influencia moltíssim. | 0 |
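As an illustration of the field layout, a row in this format can be parsed with Python's standard `csv` module. This is a minimal sketch: the sample row below mirrors example 0 from the table, and the dictionary keys are illustrative names, not the dataset's own column headers.

```python
import csv
import io

# Hypothetical TSV row mirroring the fields described above:
# index, sentence 1, sentence 2, label. Real rows come from the
# wnli-*-ca files.
sample = (
    "0\tVaig clavar una agulla en una pastanaga. "
    "Quan la vaig treure, tenia un forat.\t"
    "La pastanaga tenia un forat.\t1\n"
)

reader = csv.reader(io.StringIO(sample), delimiter="\t")
rows = [
    {"index": int(idx), "sentence1": s1, "sentence2": s2, "label": int(label)}
    for idx, s1, s2, label in reader
]
print(rows[0]["label"])  # 1: the interpretation is entailed
```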
### Data Splits
- wnli-train-ca.csv: 636
- wnli-dev-ca.csv: 72
- wnli-test-shuffled-ca.csv: 147
## Dataset Creation
### Curation Rationale
We translated this dataset to contribute to the development of language models in Catalan, a low-resource language, and to allow inter-lingual comparisons.
### Source Data
- [GLUE Benchmark site](https://gluebenchmark.com)
#### Initial Data Collection and Normalization
This is a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan, commissioned by BSC TeMU within the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
#### Who are the source language producers?
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
### Annotations
#### Annotation process
We commissioned a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Catalan.
#### Who are the annotators?
The translation was commissioned to a professional translator.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Contributions
[N/A]
|
liuyanchen1015/MULTI_VALUE_wnli_chaining_main_verbs | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 311
num_examples: 2
- name: train
num_bytes: 1723
num_examples: 10
download_size: 6103
dataset_size: 2034
---
# Dataset Card for "MULTI_VALUE_wnli_chaining_main_verbs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/mamiya_kanoko_watashinoyuriwaoshigotodesu | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Mamiya Kanoko
This is the dataset of Mamiya Kanoko, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 672 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 300 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 300 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 672 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 672 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 672 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Kriyans/ner | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: wnut-2017-emerging-and-rare-entity
pretty_name: WNUT 17
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-corporation
'2': I-corporation
'3': B-creative-work
'4': I-creative-work
'5': B-group
'6': I-group
'7': B-location
'8': I-location
'9': B-person
'10': I-person
'11': B-product
'12': I-product
config_name: wnut_17
splits:
- name: train
num_bytes: 1078379
num_examples: 3394
- name: validation
num_bytes: 259383
num_examples: 1009
- name: test
num_bytes: 405536
num_examples: 1287
download_size: 800955
dataset_size: 1743298
---
# Dataset Card for "wnut_17"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://noisy-text.github.io/2017/emerging-rare-entities.html](http://noisy-text.github.io/2017/emerging-rare-entities.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.80 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 2.55 MB
### Dataset Summary
WNUT 17: Emerging and Rare entity recognition
This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.
Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation),
but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms.
Take for example the tweet “so.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve.
This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text.
The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 0.80 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 2.55 MB
An example of 'train' looks as follows.
```
{
"id": "0",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
"tokens": ["@paulwalk", "It", "'s", "the", "view", "from", "where", "I", "'m", "living", "for", "two", "weeks", ".", "Empire", "State", "Building", "=", "ESB", ".", "Pretty", "bad", "storm", "here", "last", "evening", "."]
}
```
### Data Fields
The data fields are the same among all splits:
- `id` (`string`): ID of the example.
- `tokens` (`list` of `string`): Tokens of the example text.
- `ner_tags` (`list` of class labels): NER tags of the tokens (using IOB2 format), with possible values:
- 0: `O`
- 1: `B-corporation`
- 2: `I-corporation`
- 3: `B-creative-work`
- 4: `I-creative-work`
- 5: `B-group`
- 6: `I-group`
- 7: `B-location`
- 8: `I-location`
- 9: `B-person`
- 10: `I-person`
- 11: `B-product`
- 12: `I-product`
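The integer `ner_tags` can be decoded back to the IOB2 label strings listed above. A minimal sketch, applied to a slice of the train example shown earlier:

```python
# Class-index-to-label mapping, in the order given in the field description.
NER_LABELS = [
    "O", "B-corporation", "I-corporation", "B-creative-work", "I-creative-work",
    "B-group", "I-group", "B-location", "I-location", "B-person", "I-person",
    "B-product", "I-product",
]

# A slice of the 'train' example above ("Empire State Building = ESB").
example = {
    "tokens": ["Empire", "State", "Building", "=", "ESB"],
    "ner_tags": [7, 8, 8, 0, 7],
}

# Pair each token with its decoded IOB2 label.
decoded = [
    (tok, NER_LABELS[tag])
    for tok, tag in zip(example["tokens"], example["ner_tags"])
]
print(decoded[0])  # ('Empire', 'B-location')
```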
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 3394| 1009|1287|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{derczynski-etal-2017-results,
title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition",
author = "Derczynski, Leon and
Nichols, Eric and
van Erp, Marieke and
Limsopatham, Nut",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W17-4418",
doi = "10.18653/v1/W17-4418",
pages = "140--147",
abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.
Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization),
but recall on them is a real problem in noisy text - even among annotators.
This drop tends to be due to novel entities and surface forms.
Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'}
hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities,
and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the
ability of participating entries to detect and classify novel and emerging named entities in noisy text.",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@stefan-it](https://github.com/stefan-it), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu) for adding this dataset. |
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895592 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: sshleifer/distilbart-cnn-12-6
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-cnn-12-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
joey234/mmlu-formal_logic-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: neg_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
splits:
- name: dev
num_bytes: 9134
num_examples: 5
- name: test
num_bytes: 1353581
num_examples: 126
download_size: 164902
dataset_size: 1362715
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-formal_logic-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zhangshuoming/numericbench_eval | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 330123
num_examples: 936
download_size: 118358
dataset_size: 330123
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "numericbench_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
quirky-lats-at-mats/hp_saq_hard | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: true_answer
dtype: string
- name: false_answer
dtype: string
splits:
- name: train
num_bytes: 148972
num_examples: 1239
download_size: 73509
dataset_size: 148972
---
# Dataset Card for "hp_saq_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bleugreen/typescript-chunks | ---
task_categories:
- text-classification
- text2text-generation
- summarization
language:
- en
---
# typescript-chunks
A dataset of TypeScript snippets, processed from the typescript subset of [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol).
# Processing
- Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following types.
```
FunctionDeclaration ---- 8205
ArrowFunction --------- 33890
ClassDeclaration ------- 5325
InterfaceDeclaration -- 12884
EnumDeclaration --------- 518
TypeAliasDeclaration --- 3580
MethodDeclaration ----- 24713
```
- Leading comments are added to the front of `content`
- Removed all chunks over max sequence length (2048)
- Deduplicated / cleaned up
- Generated instructions / summaries with `gpt-3.5-turbo` (in progress)
# Dataset Structure
```python
from datasets import load_dataset
load_dataset("bleugreen/typescript-chunks")
DatasetDict({
train: Dataset({
features: ['type', 'content', 'repo', 'path', 'language'],
num_rows: 89115
})
})
``` |
techiaith/cofnodycynulliad_en-cy | ---
license: other
task_categories:
- translation
- text-classification
- summarization
- sentence-similarity
language:
- en
- cy
pretty_name: Cofnod Y Cynulliad en-cy
size_categories:
- 100K<n<1M
---
# Dataset Card for cofnodycynulliad_en-cy
## Dataset Description
- **Homepage:** https://github.com/techiaith/cofnod-y-cynulliad_dataset
- **Repository:** https://github.com/techiaith/cofnod-y-cynulliad_dataset.git
- **Point of Contact:** techiaith@bangor.ac.uk
### Dataset Summary
This dataset consists of English-Welsh sentence pairs obtained by parsing the data provided from the [Welsh Parliament](https://cofnod.senedd.cymru/) website.
### Supported Tasks and Leaderboards
- translation
- text classification
- sentence similarity
### Languages
- English
- Welsh
## Dataset Structure
### Data Fields
- source
- target
### Data Splits
- train
## Dataset Creation
The dataset was created via an internal pipeline employing DVC and Python.
### Source Data
#### Initial Data Collection and Normalization
Sentences were dropped from the original scraped sources in the following cases:
- the sentence contained too many misspelt words
- the sentence-length similarity variance was too great
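A minimal sketch of a length-similarity filter of this kind. The actual metric and thresholds used in the internal pipeline are not published, so the character-length ratio and the cutoff of 2.0 below are illustrative assumptions:

```python
def length_ratio_ok(source: str, target: str, max_ratio: float = 2.0) -> bool:
    """Reject pairs whose character lengths differ by more than max_ratio."""
    a, b = len(source), len(target)
    if min(a, b) == 0:
        return False
    return max(a, b) / min(a, b) <= max_ratio

# Illustrative English-Welsh pairs (not taken from the dataset).
pairs = [
    ("Good morning, everyone.", "Bore da, bawb."),
    ("Thank you.", "Diolch yn fawr iawn i chi i gyd am eich presenoldeb heddiw."),
]
kept = [p for p in pairs if length_ratio_ok(*p)]
print(len(kept))  # the second pair is rejected as too dissimilar in length
```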
#### Who are the source language producers?
The language data, including source and target language data, is derived from transcripts of the proceedings of the Senedd's Plenary meetings and their translations.
See [here](https://cofnod.senedd.cymru) for information.
### Licensing Information
This dataset's source data is Crown copyright and is licensed under the [Open Government License](https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/). |
vikp/xlcost_filtered_2k | ---
dataset_info:
features:
- name: text
dtype: string
- name: code
dtype: string
- name: quality_prob
dtype: float64
- name: learning_prob
dtype: float64
splits:
- name: train
num_bytes: 1631570.2028233004
num_examples: 1838
download_size: 605641
dataset_size: 1631570.2028233004
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "xlcost_filtered_2k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
datasciathlete/open-ner-english-aihub-korean | ---
dataset_info:
features:
- name: entities
list:
- name: entity_mentions
sequence: string
- name: entity_type
dtype: string
- name: spans
sequence:
sequence: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 105577988.29359013
num_examples: 119091
- name: validation
num_bytes: 26394718.70640988
num_examples: 29773
download_size: 64481279
dataset_size: 131972707.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
open-llm-leaderboard/details_bn22__Nous-Hermes-2-SOLAR-10.7B-MISALIGNED | ---
pretty_name: Evaluation run of bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED](https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bn22__Nous-Hermes-2-SOLAR-10.7B-MISALIGNED\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-04T14:01:10.806510](https://huggingface.co/datasets/open-llm-leaderboard/details_bn22__Nous-Hermes-2-SOLAR-10.7B-MISALIGNED/blob/main/results_2024-01-04T14-01-10.806510.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6656100560822746,\n\
\ \"acc_stderr\": 0.0314404179280337,\n \"acc_norm\": 0.666318457955101,\n\
\ \"acc_norm_stderr\": 0.03208487950716331,\n \"mc1\": 0.40758873929008566,\n\
\ \"mc1_stderr\": 0.017201949234553107,\n \"mc2\": 0.5779007093743166,\n\
\ \"mc2_stderr\": 0.015366654030614126\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6382252559726962,\n \"acc_stderr\": 0.014041957945038076,\n\
\ \"acc_norm\": 0.6825938566552902,\n \"acc_norm_stderr\": 0.013602239088038167\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6691894045010954,\n\
\ \"acc_stderr\": 0.004695434103958515,\n \"acc_norm\": 0.8610834495120494,\n\
\ \"acc_norm_stderr\": 0.003451525868724678\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n\
\ \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.5703703703703704,\n\
\ \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7697368421052632,\n \"acc_stderr\": 0.03426059424403165,\n\
\ \"acc_norm\": 0.7697368421052632,\n \"acc_norm_stderr\": 0.03426059424403165\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.72,\n\
\ \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \
\ \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.02845015479411864,\n\
\ \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.02845015479411864\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n\
\ \"acc_stderr\": 0.03716177437566017,\n \"acc_norm\": 0.7291666666666666,\n\
\ \"acc_norm_stderr\": 0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6416184971098265,\n\
\ \"acc_stderr\": 0.03656343653353159,\n \"acc_norm\": 0.6416184971098265,\n\
\ \"acc_norm_stderr\": 0.03656343653353159\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6042553191489362,\n \"acc_stderr\": 0.03196758697835362,\n\
\ \"acc_norm\": 0.6042553191489362,\n \"acc_norm_stderr\": 0.03196758697835362\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5526315789473685,\n\
\ \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.5526315789473685,\n\
\ \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.46825396825396826,\n \"acc_stderr\": 0.0256993528321318,\n \"\
acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.0256993528321318\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8,\n \"acc_stderr\": 0.02275520495954294,\n \"acc_norm\": 0.8,\n\
\ \"acc_norm_stderr\": 0.02275520495954294\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.541871921182266,\n \"acc_stderr\": 0.03505630140785741,\n\
\ \"acc_norm\": 0.541871921182266,\n \"acc_norm_stderr\": 0.03505630140785741\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\"\
: 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8363636363636363,\n \"acc_stderr\": 0.02888787239548795,\n\
\ \"acc_norm\": 0.8363636363636363,\n \"acc_norm_stderr\": 0.02888787239548795\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8737373737373737,\n \"acc_stderr\": 0.02366435940288022,\n \"\
acc_norm\": 0.8737373737373737,\n \"acc_norm_stderr\": 0.02366435940288022\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.02385479568097114,\n \
\ \"acc_norm\": 0.6692307692307692,\n \"acc_norm_stderr\": 0.02385479568097114\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35185185185185186,\n \"acc_stderr\": 0.02911661760608303,\n \
\ \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.02911661760608303\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.030066761582977934,\n\
\ \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.030066761582977934\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8513761467889909,\n \"acc_stderr\": 0.015251253773660834,\n \"\
acc_norm\": 0.8513761467889909,\n \"acc_norm_stderr\": 0.015251253773660834\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8480392156862745,\n\
\ \"acc_stderr\": 0.0251956584289318,\n \"acc_norm\": 0.8480392156862745,\n\
\ \"acc_norm_stderr\": 0.0251956584289318\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.869198312236287,\n \"acc_stderr\": 0.021948766059470767,\n\
\ \"acc_norm\": 0.869198312236287,\n \"acc_norm_stderr\": 0.021948766059470767\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7443946188340808,\n\
\ \"acc_stderr\": 0.029275891003969923,\n \"acc_norm\": 0.7443946188340808,\n\
\ \"acc_norm_stderr\": 0.029275891003969923\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097653,\n \"\
acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097653\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n\
\ \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092365,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092365\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8263090676883781,\n\
\ \"acc_stderr\": 0.013547415658662252,\n \"acc_norm\": 0.8263090676883781,\n\
\ \"acc_norm_stderr\": 0.013547415658662252\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069356,\n\
\ \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069356\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.358659217877095,\n\
\ \"acc_stderr\": 0.016040454426164478,\n \"acc_norm\": 0.358659217877095,\n\
\ \"acc_norm_stderr\": 0.016040454426164478\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.023805186524888135,\n\
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.023805186524888135\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\
\ \"acc_stderr\": 0.02567025924218894,\n \"acc_norm\": 0.7138263665594855,\n\
\ \"acc_norm_stderr\": 0.02567025924218894\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7716049382716049,\n \"acc_stderr\": 0.023358211840626267,\n\
\ \"acc_norm\": 0.7716049382716049,\n \"acc_norm_stderr\": 0.023358211840626267\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5212765957446809,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.5212765957446809,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5026075619295959,\n\
\ \"acc_stderr\": 0.012770062445433166,\n \"acc_norm\": 0.5026075619295959,\n\
\ \"acc_norm_stderr\": 0.012770062445433166\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7720588235294118,\n \"acc_stderr\": 0.0254830814680298,\n\
\ \"acc_norm\": 0.7720588235294118,\n \"acc_norm_stderr\": 0.0254830814680298\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706207,\n \
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706207\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.025607375986579157,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.025607375986579157\n \
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466108,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466108\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685515,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685515\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.40758873929008566,\n\
\ \"mc1_stderr\": 0.017201949234553107,\n \"mc2\": 0.5779007093743166,\n\
\ \"mc2_stderr\": 0.015366654030614126\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8342541436464088,\n \"acc_stderr\": 0.010450899545370656\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6914329037149356,\n \
\ \"acc_stderr\": 0.0127230760498159\n }\n}\n```"
repo_url: https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|arc:challenge|25_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|gsm8k|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hellaswag|10_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T14-01-10.806510.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T14-01-10.806510.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- '**/details_harness|winogrande|5_2024-01-04T14-01-10.806510.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-04T14-01-10.806510.parquet'
- config_name: results
data_files:
- split: 2024_01_04T14_01_10.806510
path:
- results_2024-01-04T14-01-10.806510.parquet
- split: latest
path:
- results_2024-01-04T14-01-10.806510.parquet
---
# Dataset Card for Evaluation run of bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED](https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bn22__Nous-Hermes-2-SOLAR-10.7B-MISALIGNED",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-04T14:01:10.806510](https://huggingface.co/datasets/open-llm-leaderboard/details_bn22__Nous-Hermes-2-SOLAR-10.7B-MISALIGNED/blob/main/results_2024-01-04T14-01-10.806510.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task's results can be found in its own configuration, under the timestamped and "latest" splits):
```json
{
"all": {
"acc": 0.6656100560822746,
"acc_stderr": 0.0314404179280337,
"acc_norm": 0.666318457955101,
"acc_norm_stderr": 0.03208487950716331,
"mc1": 0.40758873929008566,
"mc1_stderr": 0.017201949234553107,
"mc2": 0.5779007093743166,
"mc2_stderr": 0.015366654030614126
},
"harness|arc:challenge|25": {
"acc": 0.6382252559726962,
"acc_stderr": 0.014041957945038076,
"acc_norm": 0.6825938566552902,
"acc_norm_stderr": 0.013602239088038167
},
"harness|hellaswag|10": {
"acc": 0.6691894045010954,
"acc_stderr": 0.004695434103958515,
"acc_norm": 0.8610834495120494,
"acc_norm_stderr": 0.003451525868724678
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.04276349494376599,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.04276349494376599
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7697368421052632,
"acc_stderr": 0.03426059424403165,
"acc_norm": 0.7697368421052632,
"acc_norm_stderr": 0.03426059424403165
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.02845015479411864,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.02845015479411864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566017,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566017
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.03656343653353159,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.03656343653353159
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6042553191489362,
"acc_stderr": 0.03196758697835362,
"acc_norm": 0.6042553191489362,
"acc_norm_stderr": 0.03196758697835362
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5526315789473685,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.5526315789473685,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370333,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370333
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.0256993528321318,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.0256993528321318
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8,
"acc_stderr": 0.02275520495954294,
"acc_norm": 0.8,
"acc_norm_stderr": 0.02275520495954294
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.541871921182266,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.541871921182266,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8363636363636363,
"acc_stderr": 0.02888787239548795,
"acc_norm": 0.8363636363636363,
"acc_norm_stderr": 0.02888787239548795
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8737373737373737,
"acc_stderr": 0.02366435940288022,
"acc_norm": 0.8737373737373737,
"acc_norm_stderr": 0.02366435940288022
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6692307692307692,
"acc_stderr": 0.02385479568097114,
"acc_norm": 0.6692307692307692,
"acc_norm_stderr": 0.02385479568097114
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.02911661760608303,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.02911661760608303
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.030066761582977934,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.030066761582977934
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.038615575462551684,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.038615575462551684
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8513761467889909,
"acc_stderr": 0.015251253773660834,
"acc_norm": 0.8513761467889909,
"acc_norm_stderr": 0.015251253773660834
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.0251956584289318,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.0251956584289318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.869198312236287,
"acc_stderr": 0.021948766059470767,
"acc_norm": 0.869198312236287,
"acc_norm_stderr": 0.021948766059470767
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7443946188340808,
"acc_stderr": 0.029275891003969923,
"acc_norm": 0.7443946188340808,
"acc_norm_stderr": 0.029275891003969923
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097653,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097653
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.03408997886857529,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.03408997886857529
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092365,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092365
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8263090676883781,
"acc_stderr": 0.013547415658662252,
"acc_norm": 0.8263090676883781,
"acc_norm_stderr": 0.013547415658662252
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069356,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069356
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.358659217877095,
"acc_stderr": 0.016040454426164478,
"acc_norm": 0.358659217877095,
"acc_norm_stderr": 0.016040454426164478
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.023805186524888135,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.023805186524888135
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.02567025924218894,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.02567025924218894
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7716049382716049,
"acc_stderr": 0.023358211840626267,
"acc_norm": 0.7716049382716049,
"acc_norm_stderr": 0.023358211840626267
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5212765957446809,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.5212765957446809,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5026075619295959,
"acc_stderr": 0.012770062445433166,
"acc_norm": 0.5026075619295959,
"acc_norm_stderr": 0.012770062445433166
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7720588235294118,
"acc_stderr": 0.0254830814680298,
"acc_norm": 0.7720588235294118,
"acc_norm_stderr": 0.0254830814680298
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.018824219512706207,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.018824219512706207
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8,
"acc_stderr": 0.025607375986579157,
"acc_norm": 0.8,
"acc_norm_stderr": 0.025607375986579157
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466108,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466108
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685515,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685515
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.40758873929008566,
"mc1_stderr": 0.017201949234553107,
"mc2": 0.5779007093743166,
"mc2_stderr": 0.015366654030614126
},
"harness|winogrande|5": {
"acc": 0.8342541436464088,
"acc_stderr": 0.010450899545370656
},
"harness|gsm8k|5": {
"acc": 0.6914329037149356,
"acc_stderr": 0.0127230760498159
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
pietrolesci/wikitoxic | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: validation
path: embedding_all-MiniLM-L12-v2/validation-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: validation
path: embedding_all-mpnet-base-v2/validation-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: validation
path: embedding_multi-qa-mpnet-base-dot-v1/validation-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': non
'1': tox
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 55430581
num_examples: 127656
- name: validation
num_bytes: 13936861
num_examples: 31915
- name: test
num_bytes: 27474227
num_examples: 63978
download_size: 62548640
dataset_size: 96841669
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 197611488
num_examples: 127656
- name: validation
num_bytes: 49404420
num_examples: 31915
- name: test
num_bytes: 99037944
num_examples: 63978
download_size: 484421377
dataset_size: 346053852
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 393691104
num_examples: 127656
- name: validation
num_bytes: 98425860
num_examples: 31915
- name: test
num_bytes: 197308152
num_examples: 63978
download_size: 827919212
dataset_size: 689425116
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 393691104
num_examples: 127656
- name: validation
num_bytes: 98425860
num_examples: 31915
- name: test
num_bytes: 197308152
num_examples: 63978
download_size: 827907964
dataset_size: 689425116
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Toxic Wikipedia Comments
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- wikipedia
- toxicity
- toxic comments
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
This is the same dataset as [`OxAISH-AL-LLM/wiki_toxic`](https://huggingface.co/datasets/OxAISH-AL-LLM/wiki_toxic).
The only differences are:
1. Addition of a unique identifier column, `uid`
2. Addition of three embedding configurations, one per sentence-transformers model, each holding the `uid` and the corresponding embedding:
   - `all-mpnet-base-v2`
   - `multi-qa-mpnet-base-dot-v1`
   - `all-MiniLM-L12-v2`
3. Renaming of the `label` column to `labels` for easier compatibility with the transformers library
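Since every config shares the `uid` column, the embedding configs can be merged back onto the text rows with a simple lookup. A minimal sketch, using toy rows in place of the real splits (which would come from `load_dataset("pietrolesci/wikitoxic")` and `load_dataset("pietrolesci/wikitoxic", "embedding_all-MiniLM-L12-v2")`):

```python
# Sketch: attach precomputed embeddings to text rows via the shared `uid` key.
# Toy rows stand in for the real splits, which would normally be loaded with
# load_dataset("pietrolesci/wikitoxic") and
# load_dataset("pietrolesci/wikitoxic", "embedding_all-MiniLM-L12-v2").
text_rows = [
    {"uid": 0, "text": "a perfectly civil comment", "labels": 0},
    {"uid": 1, "text": "a not so civil comment", "labels": 1},
]
embedding_rows = [
    {"uid": 0, "embedding_all-MiniLM-L12-v2": [0.1, 0.2]},
    {"uid": 1, "embedding_all-MiniLM-L12-v2": [0.3, 0.4]},
]

# Build a uid -> embedding lookup, then merge it into each text row.
lookup = {row["uid"]: row["embedding_all-MiniLM-L12-v2"] for row in embedding_rows}
merged = [{**row, "embedding": lookup[row["uid"]]} for row in text_rows]

print(merged[1]["embedding"])  # [0.3, 0.4]
```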
defog/wikisql_codellama_1000 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 6652069
num_examples: 1000
download_size: 850430
dataset_size: 6652069
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikisql_codellama_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
taufiqdp/Indo4B-Plus | ---
language:
- id
license: mit
---
|
TarlewBR2/VozAlexTOP | ---
license: openrail
---
|
ura-hcmut/PhoMT-dpo | ---
language:
- vi
size_categories:
- 10K<n<100K
configs:
- config_name: envi
data_files:
- split: test
path: PhoMT_envi-dpo.json
- config_name: vien
data_files:
- split: test
path: PhoMT_vien-dpo.json
license: mit
--- |
Zainab984/BP | ---
dataset_info:
features:
- name: Target
dtype: int64
- name: PC
dtype: string
- name: GSHARE
dtype: string
- name: GA table
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 162560000
num_examples: 320000
- name: test
num_bytes: 40640000
num_examples: 80000
download_size: 11803332
dataset_size: 203200000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta1k-f6dd50-embeddings-tr_alp-61d3e1-c2048 | ---
dataset_info:
dataset_size: 475784.0
download_size: 146475
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- dtype: string
name: preview
splits:
- name: train
num_bytes: 475784.0
num_examples: 300
---
# zh-tw-llm-dev-sample-ta1k-f6dd50-embeddings-tr_alp-61d3e1-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-sample-tokenizer-a1k-f6dd50`
* Built with: `translations`, `alpaca`
* Rows: `300`
* Max length: `2048`
* Full config:
```json
{
  "build_with": ["translations", "alpaca"],
  "preview_length": 64,
  "translations_settings": {
    "source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k",
    "lang_1_key": "en",
    "lang_2_key": "ch",
    "templates": [
      "English: {lang_1}\nChinese: {lang_2}",
      "Chinese: {lang_2}\nEnglish: {lang_1}"
    ],
    "rows_limit": 100
  },
  "alpaca_settings": {
    "source_dataset": "zetavg/traditional-chinese-alpaca-en-align",
    "template": "short",
    "rows_limit": 100
  }
}
``` |
MathDG/DocLayNet-base-law_paragraphs_encoded_ml512 | ---
license: other
license_name: other
license_link: LICENSE
dataset_info:
features:
- name: page_hash
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: normalized_bboxes
sequence:
sequence: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 43899968
num_examples: 1744
- name: validation
num_bytes: 5411980
num_examples: 215
- name: test
num_bytes: 3373048
num_examples: 134
download_size: 1371852
dataset_size: 52684996
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ar_res_reviews | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ArRestReviews
dataset_info:
features:
- name: polarity
dtype:
class_label:
names:
'0': negative
'1': positive
- name: text
dtype: string
- name: restaurant_id
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 3617085
num_examples: 8364
download_size: 1887029
dataset_size: 3617085
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for ArRestReviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces)
- **Repository:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces)
- **Paper:** [ Building Large Arabic Multi-domain Resources for Sentiment Analysis](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces/blob/master/Paper%20-%20Building%20Large%20Arabic%20Multi-domain%20Resources%20for%20Sentiment%20Analysis.pdf)
- **Point of Contact:** [hady elsahar](hadyelsahar@gmail.com)
### Dataset Summary
Dataset of 8364 restaurant reviews from qaym.com in Arabic for sentiment analysis
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is based on Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the following:
- "polarity": the sentiment label of the review, 0 (negative) or 1 (positive)
- "text": the plain-text review of a restaurant, in Arabic
- "restaurant_id": the restaurant ID on the website
- "user_id": the user ID on the website
example:
```
{
'polarity': 0, # negative
'restaurant_id': '1412',
'text': 'عادي جدا مامن زود',
'user_id': '21294'
}
```
### Data Fields
- "polarity": the sentiment label of the review, 0 (negative) or 1 (positive)
- "text": the plain-text review of a restaurant, in Arabic
- "restaurant_id": the restaurant ID on the website (string)
- "user_id": the user ID on the website (string)
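For illustration, the integer class label can be decoded back into its string name. The mapping below mirrors the `class_label` definition in this card's metadata ('0': negative, '1': positive); with the `datasets` library the same conversion is exposed as `ds.features["polarity"].int2str`:

```python
# Sketch: decode the integer `polarity` class label into its string name.
# The mapping mirrors the class_label spec in this card; with the `datasets`
# library, ds.features["polarity"].int2str(i) performs the same conversion.
ID2LABEL = {0: "negative", 1: "positive"}

example = {"polarity": 0, "restaurant_id": "1412", "user_id": "21294"}
print(ID2LABEL[example["polarity"]])  # negative
```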
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
Contains 8364 restaurant reviews from qaym.com
#### Who are the source language producers?
Users of qaym.com.
### Annotations
The polarity field provides a label of 0 or 1 pertaining to the sentiment of the review
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{10.1007/978-3-319-18117-2_2,
author="ElSahar, Hady
and El-Beltagy, Samhaa R.",
editor="Gelbukh, Alexander",
title="Building Large Arabic Multi-domain Resources for Sentiment Analysis",
booktitle="Computational Linguistics and Intelligent Text Processing",
year="2015",
publisher="Springer International Publishing",
address="Cham",
pages="23--34",
isbn="978-3-319-18117-2"
}
```
### Contributions
Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset. |
hotchpotch/JaCWIR | ---
dataset_info:
- config_name: collection
features:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: link
dtype: string
- name: date
dtype: string
splits:
- name: collection
num_bytes: 310438137
num_examples: 513107
download_size: 209324875
dataset_size: 310438137
- config_name: eval
features:
- name: query
dtype: string
- name: positive
dtype: string
- name: negatives
sequence: string
splits:
- name: eval
num_bytes: 12045094
num_examples: 5000
download_size: 7444454
dataset_size: 12045094
configs:
- config_name: collection
data_files:
- split: collection
path: collection/collection-*
- config_name: eval
data_files:
- split: eval
path: eval/eval-*
license: other
language:
- ja
---
# JaCWIR: Japanese Casual Web IR - A Small, Casual Dataset of Japanese Web Page Titles and Summaries for Information Retrieval Evaluation
In recent years, with the rise of large language models (LLMs), it has become increasingly common for users to pose questions as natural search queries in everyday Japanese. However, there are not enough Japanese datasets for evaluating whether an information retrieval system can properly answer user questions over web articles spanning diverse genres.
JaCWIR is a small Japanese information retrieval evaluation dataset consisting of 5,000 questions and roughly 500,000 short entries, each made up of a web page title plus the page's opening text or a summary (such as its meta description). Each question was generated from one of the 500,000 web pages, and that page serves as the question's positive example.
The source data was collected from RSS feeds provided by [Hatena Bookmark](https://b.hatena.ne.jp/), one of Japan's largest social bookmarking services, filtered to include article titles and summaries from a wide range of web genres. Questions were then generated with ChatGPT 3.5 from articles sampled from this data, yielding the Japanese IR evaluation dataset "**JaCWIR**: Japanese Casual Web IR dataset" (pronounced "jack-will").
The dataset itself is published on HuggingFace, and example evaluation code is published on GitHub.
- 🤗 [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR)
  - The JaCWIR dataset, published on HuggingFace
- 🛠️ [JaCWIR GitHub repository](https://github.com/hotchpotch/JaCWIR/)
  - 📈 [Evaluation code](https://github.com/hotchpotch/JaCWIR/tree/main/evaluator) is published on GitHub.
## Features of JaCWIR
JaCWIR is a Japanese dataset containing titles and summaries (descriptions) of web articles from a wide variety of genres. The retrieval questions were generated with ChatGPT 3.5, and the dataset is intended primarily for evaluating information retrieval (IR) tasks.
JaCWIR is not a rigorously designed dataset built from many carefully considered perspectives; it is a casual dataset created to serve as one rough benchmark for retrieval over diverse Japanese web articles.
The copyright of the title and description data in the collection belongs to the pages at the collection URLs. The query (question) data was generated with ChatGPT 3.5, so it may not be used to train models that compete with OpenAI. For these reasons, please use the JaCWIR data for research and non-commercial purposes only, for information retrieval evaluation.
### Evaluation tasks and metrics
JaCWIR is an information retrieval task: given a question, find the article from which that question was generated. Using all of the data, it can be evaluated as an IR task over 500,000 documents.
For smaller-scale, 100-document IR / reranking evaluation, the dataset also provides, for each question, one positive example (positive) and 99 hard negatives (negatives) mined with BM25 and sentence embedding models.
The reranking task uses MAP@10 (Mean Average Precision at 10) as its evaluation metric. MAP is one of the most widely used metrics for evaluating information retrieval systems and is well suited to assessing the relevance of the top-ranked results that matter most to users. Concretely, it averages the relevance of the top 10 retrieved results for each question to score the system as a whole. Using MAP makes it possible to evaluate not just whether the correct answer appears near the top, but also the ordering of the top results.
An example script for running the evaluation easily is provided under [evaluator on GitHub](https://github.com/hotchpotch/JaCWIR/tree/main/evaluator). The script can evaluate any retrieval model that implements a common interface.
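Because each JaCWIR query has exactly one positive document, AP@10 reduces to the reciprocal of the positive's rank when it appears in the top 10, and 0 otherwise; MAP@10 is the mean over all queries. A minimal sketch of the metric (helper names are illustrative, not the evaluator's actual API):

```python
def average_precision_at_k(ranked_ids, positive_id, k=10):
    # With a single relevant document, AP@k is 1/rank if the positive
    # appears within the top k results, and 0 otherwise.
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == positive_id:
            return 1.0 / rank
    return 0.0

def map_at_k(ranked_lists, positive_ids, k=10):
    # Mean of AP@k over all queries.
    aps = [average_precision_at_k(r, p, k)
           for r, p in zip(ranked_lists, positive_ids)]
    return sum(aps) / len(aps)

# Toy example: positives ranked 1st and 3rd -> (1.0 + 1/3) / 2
print(map_at_k([["a", "b"], ["x", "y", "z"]], ["a", "z"]))
```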
## Reranking task evaluation
Results for the 100-document reranking task are shown below. In addition to MAP@10, HIT_RATE@10 is shown for reference.
#### Sentence embedding models
| model_names | map@10 | hit_rate@10 |
| :------------------------------------------------------------------------------ | -----: | ----------: |
| [text-embedding-3-small](https://platform.openai.com/docs/guides/embeddings) | 0.8168 | 0.9506 |
| [unsup-simcse-ja-base](https://huggingface.co/cl-nagoya/unsup-simcse-ja-base) | 0.4426 | 0.693 |
| [unsup-simcse-ja-large](https://huggingface.co/cl-nagoya/unsup-simcse-ja-large) | 0.4772 | 0.7188 |
| [sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base) | 0.5778 | 0.7976 |
| [sup-simcse-ja-large](https://huggingface.co/cl-nagoya/sup-simcse-ja-large) | 0.4741 | 0.7164 |
| [GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.6862 | 0.8706 |
| [fio-base-japanese-v0.1](https://huggingface.co/bclavie/fio-base-japanese-v0.1) | 0.6491 | 0.8544 |
| [bge-m3+dense](https://huggingface.co/BAAI/bge-m3) | 0.8642 | 0.9684 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.8759 | 0.9726 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 0.869 | 0.97 |
#### ColBERT models
| model_names | map@10 | hit_rate@10 |
| :-------------------------------------------------------- | -----: | ----------: |
| [JaColBERTv2](https://huggingface.co/bclavie/JaColBERTv2) | 0.9185 | 0.9854 |
| [JaColBERT](https://huggingface.co/bclavie/JaColBERT) | 0.9035 | 0.9772 |
| [bge-m3+colbert](https://huggingface.co/BAAI/bge-m3) | 0.9064 | 0.9802 |
#### CrossEncoder models
| model_names | map@10 | hit_rate@10 |
| :----------------------------------------------------------------------------------------------------------------------- | -----: | ----------: |
| [japanese-reranker-cross-encoder-xsmall-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-xsmall-v1) | 0.9376 | 0.9894 |
| [japanese-reranker-cross-encoder-small-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-small-v1) | 0.939 | 0.9908 |
| [japanese-reranker-cross-encoder-base-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-base-v1) | 0.9337 | 0.9878 |
| [japanese-reranker-cross-encoder-large-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-large-v1) | 0.9364 | 0.9816 |
| [japanese-bge-reranker-v2-m3-v1](https://huggingface.co/hotchpotch/japanese-bge-reranker-v2-m3-v1) | 0.9372 | 0.992 |
| [bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) | 0.9343 | 0.9914 |
| [shioriha-large-reranker](https://huggingface.co/cl-nagoya/shioriha-large-reranker) | 0.8458 | 0.9562 |
| [bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 0.4905 | 0.7334 |
| [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 0.7332 | 0.9314 |
| [cross-encoder-mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/corrius/cross-encoder-mmarco-mMiniLMv2-L12-H384-v1) | 0.9211 | 0.984 |
#### Other models
| model_names | map@10 | hit_rate@10 |
| :-------------------------------------------------- | -----: | ----------: |
| [bge-m3+all](https://huggingface.co/BAAI/bge-m3) | 0.904 | 0.9796 |
| [bge-m3+sparse](https://huggingface.co/BAAI/bge-m3) | 0.8944 | 0.9778 |
| bm25 | 0.8408 | 0.9528 |
## License
The JaCWIR dataset is licensed as follows.
- The "query" question data in eval
  - Subject to the [OpenAI business terms](https://openai.com/policies/business-terms)
- The "title" and "description" data in collection
  - Rights belong to the creators of the web pages at the URLs listed in collection
## Closing remarks and acknowledgments
I decided to build the JaCWIR dataset when, while training models almost exclusively on Wikipedia text, I noticed that they scored well on Wikipedia-related tasks but dropped sharply on text outside the Wikipedia domain, and I wanted to evaluate on data that does not use Wikipedia. That led me to build an information retrieval task based on diverse web content beyond Wikipedia.
As a result, I was able to train models that are not over-optimized for Wikipedia, and was reminded of the importance of diversity and generalization.
Data collection relied on the RSS feeds provided by Hatena Bookmark; without them, gathering such a wide range of web topics would have been a completely different level of difficulty. My thanks to Hatena Co., Ltd. and the users of Hatena Bookmark for making this valuable data available.
---
## Citation
```
@misc{yuichi-tateno-2024-jacwir,
url={https://huggingface.co/datasets/hotchpotch/JaCWIR},
title={JaCWIR: Japanese Casual Web IR - 日本語情報検索評価のための小規模でカジュアルなWebタイトルと概要のデータセット},
author={Yuichi Tateno}
}
``` |
maulinnasari/dataset_ext_15_mn | ---
dataset_info:
features:
- name: document
sequence: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 138378746
num_examples: 44972
- name: validation
num_bytes: 16994675
num_examples: 5622
- name: test
num_bytes: 17112258
num_examples: 5622
download_size: 109003001
dataset_size: 172485679
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
CVasNLPExperiments/OK-VQA_test_google_flan_t5_xl_mode_T_A_D_PNP_FILTER_C_Q_rices_ns_5046 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
sequence: string
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_text
num_bytes: 49486823
num_examples: 5046
- name: fewshot_0_clip_tags_ViT_L_14_with_openai_Attributes_ViT_L_14_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_text
num_bytes: 49665385
num_examples: 5046
download_size: 16591454
dataset_size: 99152208
---
# Dataset Card for "OK-VQA_test_google_flan_t5_xl_mode_T_A_D_PNP_FILTER_C_Q_rices_ns_5046"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aureliojafer/twitter_dataset_1710099134 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
splits:
- name: train
num_bytes: 77397
num_examples: 213
download_size: 46239
dataset_size: 77397
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Honcel/SPSVID00391 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: file
dtype: string
splits:
- name: test
num_bytes: 24325613.0
num_examples: 135
download_size: 24325811
dataset_size: 24325613.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
amphora/finqa_suite | ---
configs:
- config_name: mmlu_professional_accounting
data_files:
- split: test
path: data/mmlu_pa_test.csv
- split: dev
path: data/mmlu_professional_accounting_dev.csv
- config_name: mmlu_high_school_microeconomics
data_files:
- split: test
path: data/mmlu_mi_test.csv
- split: dev
path: data/mmlu_high_school_microeconomics_dev.csv
- config_name: mmlu_high_school_macroeconomics
data_files:
- split: test
path: data/mmlu_ma_test.csv
- split: dev
path: data/mmlu_high_school_macroeconomics_dev.csv
- config_name: mmlu_econometrics
data_files:
- split: test
path: data/mmlu_em_test.csv
- split: dev
path: data/mmlu_econometrics_dev.csv
- config_name: finqa
data_files:
- split: test
path: data/finqa_test.csv
- split: dev
path: data/finqa_dev.csv
- config_name: convfinqa
data_files:
- split: test
path: data/convfinqa_test.csv
- split: dev
path: data/convfinqa_valid.csv
license: mit
---
|
yiyic/t5_me5_test_data | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: mtg_en
num_bytes: 48454
num_examples: 500
- name: mtg_de
num_bytes: 57186
num_examples: 500
- name: mtg_es
num_bytes: 51263
num_examples: 500
- name: mtg_fr
num_bytes: 59140
num_examples: 500
- name: nq_en
num_bytes: 306325
num_examples: 500
- name: en_rt
num_bytes: 49483
num_examples: 500
download_size: 406073
dataset_size: 571851
configs:
- config_name: default
data_files:
- split: mtg_en
path: data/mtg_en-*
- split: mtg_de
path: data/mtg_de-*
- split: mtg_es
path: data/mtg_es-*
- split: mtg_fr
path: data/mtg_fr-*
- split: nq_en
path: data/nq_en-*
- split: en_rt
path: data/en_rt-*
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1a71478a | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1337
dataset_size: 180
---
# Dataset Card for "1a71478a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
boseong/Dataset.llamabs | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 22065
num_examples: 63
download_size: 10612
dataset_size: 22065
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
darcy01/autotrain-data-opus-mt-en-zh_hanz | ---
language:
- en
- zh
task_categories:
- translation
---
# AutoTrain Dataset for project: opus-mt-en-zh_hanz
## Dataset Description
This dataset has been automatically processed by AutoTrain for project opus-mt-en-zh_hanz.
### Languages
The dataset is an English (en) to Chinese (zh) translation pair; AutoTrain reports the language pair as en2zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "And then I hear something.",
"target": "\u63a5\u7740\u542c\u5230\u4ec0\u4e48\u52a8\u9759\u3002",
"feat_en_length": 26,
"feat_zh_length": 9
},
{
"source": "A ghostly iron whistle blows through the tunnels.",
"target": "\u9b3c\u9b45\u7684\u54e8\u58f0\u5439\u8fc7\u96a7\u9053\u3002",
"feat_en_length": 49,
"feat_zh_length": 10
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"feat_en_length": "Value(dtype='int64', id=None)",
"feat_zh_length": "Value(dtype='int64', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 16350 |
| valid | 4088 |
|
AdapterOcean/langchain-standardized_unified | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 3969713
num_examples: 993
download_size: 1776815
dataset_size: 3969713
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "langchain-standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |