id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
KirbyShrine/bagbean2 | KirbyShrine | 2022-11-29T18:22:57Z | 18 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-11-29T18:22:57Z | 2022-11-29T18:21:35.000Z | 2022-11-29T18:21:35 | ---
license: cc-by-nc-nd-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c9cce3-2280272258 | autoevaluate | 2022-11-29T18:37:53Z | 18 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-29T18:37:53Z | 2022-11-29T18:34:50.000Z | 2022-11-29T18:34:50 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | [
-0.31795793771743774,
-0.2865525484085083,
0.3210281729698181,
0.0391288660466671,
-0.0110769122838974,
-0.10346265882253647,
0.06844202429056168,
-0.4729427099227905,
0.22260674834251404,
0.26562896370887756,
-0.951708197593689,
-0.2855469882488251,
-0.6676428914070129,
-0.118400320410728... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
QonfiAI/ringeko | QonfiAI | 2022-11-29T19:32:58Z | 18 | 0 | null | [
"region:us"
] | 2022-11-29T19:32:58Z | 2022-11-29T19:27:51.000Z | 2022-11-29T19:27:51 | bidi | [
0.04128564894199371,
0.2673620581626892,
0.4387460947036743,
0.24785682559013367,
-0.4130001664161682,
-0.013896824792027473,
0.40363234281539917,
-0.13058245182037354,
0.5144379138946533,
0.4637901782989502,
-0.3608896732330322,
-0.23201884329319,
-0.6519021987915039,
-0.18535155057907104... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-205dcc30-381f-492a-a8e8-fcfbe94b826c-110107 | autoevaluate | 2022-11-29T19:51:54Z | 18 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-29T19:51:54Z | 2022-11-29T19:51:09.000Z | 2022-11-29T19:51:09 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.20361605286598206,
-0.33383142948150635,
0.2989133596420288,
0.17618133127689362,
-0.16354314982891083,
0.03615495190024376,
0.020895475521683693,
-0.39217695593833923,
0.12184618413448334,
0.3618122935295105,
-0.9186378717422485,
-0.21669870615005493,
-0.770520806312561,
-0.01348786149... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-7a996eab-fd9f-4453-b298-d76d6134fbe7-111108 | autoevaluate | 2022-11-29T20:05:45Z | 18 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-29T20:05:45Z | 2022-11-29T20:05:07.000Z | 2022-11-29T20:05:07 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.20361605286598206,
-0.33383142948150635,
0.2989133596420288,
0.17618133127689362,
-0.16354314982891083,
0.03615495190024376,
0.020895475521683693,
-0.39217695593833923,
0.12184618413448334,
0.3618122935295105,
-0.9186378717422485,
-0.21669870615005493,
-0.770520806312561,
-0.01348786149... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-project-318497e7-9d2a-403c-be28-ce4ff065ca1d-112109 | autoevaluate | 2022-11-29T20:08:17Z | 18 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-29T20:08:17Z | 2022-11-29T20:07:42.000Z | 2022-11-29T20:07:42 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/zero-shot-classification-sample
eval_info:
task: text_zero_shot_classification
model: autoevaluate/zero-shot-classification
metrics: []
dataset_name: autoevaluate/zero-shot-classification-sample
dataset_config: autoevaluate--zero-shot-classification-sample
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | [
-0.27314117550849915,
-0.3553648591041565,
0.3160736560821533,
0.02020207606256008,
-0.04853355512022972,
-0.06055496260523796,
0.09194678068161011,
-0.4492080807685852,
0.1300031542778015,
0.3204926550388336,
-0.9075667858123779,
-0.3593883812427521,
-0.7324904203414917,
-0.02573315240442... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KirbyShrine/wally_bagbean | KirbyShrine | 2022-11-30T00:05:00Z | 18 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-11-30T00:05:00Z | 2022-11-30T00:03:35.000Z | 2022-11-30T00:03:35 | ---
license: cc-by-nc-nd-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tristan/olm-test-no-dedup | Tristan | 2022-11-30T00:03:52Z | 18 | 0 | null | [
"region:us"
] | 2022-11-30T00:03:52Z | 2022-11-30T00:03:44.000Z | 2022-11-30T00:03:44 | ---
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
- name: crawl_timestamp
dtype: float64
splits:
- name: train
num_bytes: 249659214.0
num_examples: 46032
download_size: 149319674
dataset_size: 249659214.0
---
# Dataset Card for "olm-test-no-dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6976850032806396,
-0.5703534483909607,
0.20733346045017242,
0.003782860469073057,
-0.13476324081420898,
-0.277953565120697,
0.33465397357940674,
0.034323375672101974,
0.5658709406852722,
0.6971002221107483,
-0.7166764140129089,
-0.8502097129821777,
-0.5177296996116638,
-0.16560207307338... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fathyshalab/work | fathyshalab | 2022-11-30T12:31:51Z | 18 | 0 | null | [
"region:us"
] | 2022-11-30T12:31:51Z | 2022-11-30T08:21:17.000Z | 2022-11-30T08:21:17 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xusenlin/duie | xusenlin | 2022-12-07T14:49:54Z | 18 | 0 | null | [
"region:us"
] | 2022-12-07T14:49:54Z | 2022-12-07T14:41:25.000Z | 2022-12-07T14:41:25 | ---
dataset_info:
features:
- name: text
dtype: string
- name: spo_list
list:
- name: predicate
dtype: string
- name: object_type
dtype: string
- name: subject_type
dtype: string
- name: object
dtype: string
- name: subject
dtype: string
splits:
- name: train
num_bytes: 51849478
num_examples: 172983
- name: validation
num_bytes: 6512116
num_examples: 21626
download_size: 32568292
dataset_size: 58361594
---
# DuIE Relation Extraction Dataset
Field descriptions
+ `text`: the text
+ `spo_list`: the relation triples contained in the text
+ `subject`: the head entity (subject)
+ `subject_type`: the type of the head entity (subject)
+ `object`: the tail entity (object)
+ `object_type`: the type of the tail entity (object)
+ `predicate`: the relation
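As a minimal sketch of how the fields above fit together, the following flattens the `spo_list` of one record into (subject, predicate, object) triples (the sample record is hypothetical, with placeholder values):

```python
def extract_triples(record):
    """Return the relation triples of one DuIE record as tuples."""
    return [
        (spo["subject"], spo["predicate"], spo["object"])
        for spo in record["spo_list"]
    ]

# Hypothetical record with the field layout described above.
sample = {
    "text": "...",
    "spo_list": [
        {"subject": "A", "subject_type": "PER",
         "predicate": "rel", "object": "B", "object_type": "ORG"},
    ],
}
print(extract_triples(sample))  # [('A', 'rel', 'B')]
```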
| [
-0.3376312851905823,
-0.905718982219696,
0.2901664972305298,
0.5807656049728394,
-0.7985300421714783,
-0.007199310697615147,
0.208861842751503,
0.11181683838367462,
0.5810423493385315,
0.8539934754371643,
-0.28032001852989197,
-0.6173608303070068,
-1.0599828958511353,
0.3338524401187897,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qag_esquad | lmqg | 2022-12-18T08:01:13Z | 18 | 1 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_esquad",
"language:es",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-18T08:01:13Z | 2022-12-18T07:06:04.000Z | 2022-12-18T07:06:04 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: es
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_esquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_esquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the ESQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
Spanish (es)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""4 Minutes" fue lanzado como el primer sencillo del álbum y alcanzó el número tres en el Billboard Hot 100. Fue el 37º hit top-ten de Madonna en la lista, empujando a Madonna más allá de Elvis Presley como el artista con más éxitos entre los diez primeros. En el Reino Unido mantuvo su récord de más sencillos número uno para una artista femenina; "4 Minutes" se convierte en su decimotercera. En el 23 Japan Gold Disc Awards, Madonna recibió su quinto trofeo de Artista del Año de la Recording Industry Association of Japan, la mayor cantidad para cualquier artista. Para promover aún más el álbum, Madonna se embarcó en el Sticky & Sweet Tour; Su primera gran empresa con Live Nation. Con una recaudación de $280 millones, se convirtió en la gira más taquillera de un artista en solitario entonces, superando el récord anterior que Madonna estableció con la gira Confessions Tour; Más tarde fue superado por The Wall Live de Roger Waters. Se amplió al año siguiente, añadiendo nuevas fechas europeas, y después de que terminó, la recaudación total fue de $408 millones.",
"questions": [ "¿Cuál es el nombre de la primera gira con Live Nation?", "4 minutos se convirtió en la canción número uno de Madonna en el Reino Unido.", "¿Cuál sencillo fue lanzado como el primer sencillo del álbum?", "¿Cuánto recaudaron Stick y Sweet Tour?", "Madonna superó a qué artista con más éxitos entre los diez primeros." ],
 "answers": [ "Sticky & Sweet Tour", "decimotercera", "\"4 Minute", "$280 millones,", "Elvis Presley" ],
 "questions_answers": "question: ¿Cuál es el nombre de la primera gira con Live Nation?, answer: Sticky & Sweet Tour | question: 4 minutos se convirtió en la canción número uno de Madonna en el Reino Unido., answer: decimotercera | question: ¿Cuál sencillo fue lanzado como el primer sencillo del álbum?, answer: \"4 Minute | question: ¿Cuánto recaudaron Stick y Sweet Tour?, answer: $280 millones, | question: Madonna superó a qué artista con más éxitos entre los diez primeros., answer: Elvis Presley"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
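The flattened `questions_answers` string can be split back into question/answer pairs. A minimal sketch, assuming the `question: ..., answer: ... | ...` separator format shown in the example above holds throughout (answers that themselves contain the separator would need extra care):

```python
def parse_questions_answers(qa_string):
    """Split the flattened 'question: ..., answer: ...' string into pairs."""
    pairs = []
    for chunk in qa_string.split(" | "):
        question, answer = chunk.split(", answer: ", 1)
        pairs.append((question.removeprefix("question: "), answer))
    return pairs

example = "question: Q1?, answer: A1 | question: Q2?, answer: A2"
print(parse_questions_answers(example))  # [('Q1?', 'A1'), ('Q2?', 'A2')]
```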
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|18829| 2067 | 8234|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.5196108222007751,
-0.9293317794799805,
0.17307668924331665,
0.09359218925237656,
-0.2615346908569336,
-0.033087458461523056,
-0.10173437744379044,
-0.4485214352607727,
0.4134872555732727,
0.5213621854782104,
-0.822026252746582,
-0.584435224533081,
-0.18948523700237274,
0.143217906355857... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TrpFrog/trpfrog-icons | TrpFrog | 2022-12-30T04:37:09Z | 18 | 1 | null | [
"license:mit",
"region:us"
] | 2022-12-30T04:37:09Z | 2022-12-29T17:00:46.000Z | 2022-12-29T17:00:46 | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': green
'1': others
splits:
- name: train
num_bytes: 3106612.0
num_examples: 50
download_size: 2598455
dataset_size: 3106612.0
---

# trpfrog-icons Dataset
This is a dataset of [TrpFrog](https://trpfrog.net)'s icons. By the way, what do you use this for? 🤔
## How to use
```py
from datasets import load_dataset
dataset = load_dataset("TrpFrog/trpfrog-icons")
```
```py
# print all data
for data in dataset["train"]:
print(data)
# keep only the green icons (label 0)
dataset = dataset.filter(lambda x: x["label"] == 0)
```
## License
MIT License | [
-0.4783337116241455,
-0.0817222073674202,
-0.22816509008407593,
0.19708223640918732,
-0.40651118755340576,
0.19028426706790924,
0.19549517333507538,
-0.2233818769454956,
0.580498218536377,
0.4347051680088043,
-0.7503243684768677,
-0.6061417460441589,
-0.3580567538738251,
0.2617158889770508... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
and-effect/mdk_gov_data_titles_clf | and-effect | 2023-05-25T12:43:42Z | 18 | 1 | null | [
"task_categories:text-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:de",
"license:cc-by-4.0",
"region:us"
] | 2023-05-25T12:43:42Z | 2023-01-04T16:20:31.000Z | 2023-01-04T16:20:31 | ---
annotations_creators: crowdsourced
language_creators: other
language: de
multilinguality: monolingual
size_categories:
- 1K<n<10K
source_datasets: extended
task_categories:
- text-classification
pretty_name: GOVDATA dataset titles labelled
license: cc-by-4.0
---
# Dataset Card for MDK
This dataset was created as part of the [Bertelsmann Foundation's](https://www.bertelsmann-stiftung.de/de/startseite)
[Musterdatenkatalog (MDK)](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) project. The MDK provides an overview of Open Data in municipalities in Germany. It is intended to help municipalities in Germany, as well as data analysts and journalists, to get an overview of the topics and the extent to which cities have already published data sets.
## Dataset Description
### Dataset Summary
The dataset is an annotated corpus of 1258 records based on the metadata of the datasets from [GOVDATA](https://www.govdata.de/). GovData is a data portal that aims to make cities' data available in a standardized way.
The annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through this assignment, the names of the datasets can be normalized and grouped. In total, the taxonomy consists of 250 categories. Each category is divided into two levels:
- Level 1: "Thema" (topic)

- Level 2: "Bezeichnung" (label).
The first dash divides the levels. For example:

You can find an interactive view of the taxonomy with all labels [here](https://huggingface.co/spaces/and-effect/Musterdatenkatalog).
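Since the first dash separates the two levels, a label string can be split into its "Thema" and "Bezeichnung" parts. A minimal sketch (the helper name is our own; the labels are taken from the taxonomy examples above):

```python
def split_label(labels_name):
    """Split a taxonomy label at the first dash into (Thema, Bezeichnung)."""
    thema, bezeichnung = labels_name.split(" - ", 1)
    return thema, bezeichnung

print(split_label("Abfallwirtschaft - Abfallkalender"))
# ('Abfallwirtschaft', 'Abfallkalender')
print(split_label("Verkehr - KFZ - Messung"))
# ('Verkehr', 'KFZ - Messung')
```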
The repository contains a small and a large version of the data. The small version is for testing purposes only; the large version contains all 1258 entries. Both the large and the small dataset are split into a training and a testing dataset. In addition, the large dataset folder contains a validation dataset that was annotated separately. The validation dataset is an additional dataset that we created for the evaluation of the algorithm. It also consists of data from GOVDATA and has the same structure as the test and training datasets.
### Languages
The language data is German.
## Dataset Structure
### Data Fields
| dataset | size |
|-----|-----|
| small/train | 18.96 KB |
| small/test | 6.13 KB |
| large/train | 517.77 KB |
| large/test | 118.66 KB |
An example looks as follows:
```json
{
"doc_id": "a063d3b7-4c09-421e-9849-073dc8939e76",
"title": "Dienstleistungen Alphabetisch sortiert April 2019",
"description": "CSV-Datei mit allen Dienstleistungen der Kreisverwaltung Kleve. Sortiert nach AlphabetStand 01.04.2019",
"labels_name": "Sonstiges - Sonstiges",
"labels": 166
}
```
The data fields are the same among all splits:
- doc_id (uuid): identifier for each document
- title (str): dataset title from GOVDATA
- description (str): description of the dataset
- labels_name (str): annotation with labels from taxonomy
- labels (int): labels indexed from 0 to 250
### Data Splits
| dataset_name | dataset_splits | train_size | test_size | validation_size |
|-----|-----|-----|-----|-----|
| dataset_large | train, test, validation | 1009 | 249 | 101 |
| dataset_small | train, test | 37 | 13 | None |
## Dataset Creation
The dataset was created through multiple manual annotation rounds.
### Source Data
The data comes from [GOVDATA](https://www.govdata.de/), the open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments, making the data available in one place and thus easier to use. The available data is structured in 13 categories, ranging from finance to international topics, health, education, and science and technology. [GOVDATA](https://www.govdata.de/) offers a [CKAN API](https://ckan.govdata.de/) for making requests and provides metadata for each data entry.
#### Initial Data Collection and Normalization
Several sources were used for the annotation process. A sample was collected from [GOVDATA](https://www.govdata.de/) with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) that contain older data from [GOVDATA](https://www.govdata.de/). Some of the datasets from the old [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.
### Annotations
#### Annotation process
The data was annotated in four rounds plus one additional test round. In each round a percentage of the data was allocated to all annotators to calculate the inter-annotator agreement using Cohen's Kappa.
The following table shows the results of the annotations:
| | **Cohen's Kappa** | **Number of Annotators** | **Number of Documents** |
| ------------------ | :--------------: | ------------------------ | ----------------------- |
| **Test Round** | .77 | 6 | 50 |
| **Round 1** | .41 | 2 | 120 |
| **Round 2** | .76 | 4 | 480 |
| **Round 3** | .71 | 3 | 420 |
| **Round 4** | .87 | 2 | 416 |
| **Validation set** | - | 1 | 177 |
In addition, a validation set was generated by the dataset curators.
#### Who are the annotators?
Annotators are all employees of [&effect data solutions GmbH](https://www.and-effect.com/). The taxonomy, as well as rules and problems in the assignment of datasets, were discussed and debated ahead of the taxonomy's development and the annotation in two workshops with experts and representatives of the open data community and local governments, as well as with the project members of the [Musterdatenkatalog](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) from the Bertelsmann Foundation. On this basis, the [&effect](https://www.and-effect.com/) employees were instructed in the annotation by the curators of the datasets.
## Considerations for Using the Data
The dataset for the annotation process was generated by sampling from [GOVDATA](https://www.govdata.de/) and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and data can get deleted. Thus, there is no guarantee that data entries included here will still be available.
### Social Impact of Dataset
Since 2017, the German government has been promoting systematic and free access to public administration data, starting with the first laws on open data in municipalities. In this way, it aims to contribute to the development of a [knowledge society](https://www.verwaltung-innovativ.de/DE/Startseite/startseite_node.html). The categorization of cities' open data in a standardized and detailed taxonomy supports this process of making municipal data accessible freely, openly and in a structured way.
### Discussion of Biases (non-ethical)
The data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled, there is still some imbalance in the data. For example, entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Manual selection of data was also used, since no data entries could be found for some of the previous concepts. However, for 95% of the concepts at least one data entry is available.
## Additional Information
### Dataset Curators
Friederike Bauer
Rahkakavee Baskaran
### Licensing Information
CC BY 4.0 | [
-0.7849469780921936,
-0.5584766268730164,
0.3554081618785858,
-0.01546116080135107,
-0.4070211350917816,
-0.39588361978530884,
-0.20837268233299255,
-0.41956841945648193,
0.5220338702201843,
0.605545699596405,
-0.4343098998069763,
-0.9473475217819214,
-0.6136234402656555,
0.19801065325737,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nbtpj/DUC2004 | nbtpj | 2023-01-09T10:56:59Z | 18 | 0 | null | [
"region:us"
] | 2023-01-09T10:56:59Z | 2023-01-09T10:47:36.000Z | 2023-01-09T10:47:36 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rcds/swiss_legislation | rcds | 2023-07-20T07:36:07Z | 18 | 5 | null | [
"task_categories:text-classification",
"task_categories:translation",
"size_categories:100K<n<1M",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] | 2023-07-20T07:36:07Z | 2023-01-22T20:02:28.000Z | 2023-01-22T20:02:28 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
- translation
language:
- de
- fr
- it
pretty_name: Swiss Legislation
size_categories:
- 100K<n<1M
---
# Dataset Card for Swiss Legislation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Legislation is a multilingual, diachronic dataset of 36K Swiss laws. This dataset is part of a challenging Information Retrieval task.
### Supported Tasks and Leaderboards
### Languages
The total number of texts in the dataset is 35,698. The dataset is saved in _lexfind_v2.jsonl_ format.
Switzerland has four official languages, German, French, Italian and Romansh, with some additional English laws also represented. Laws are written by legal experts.
| Language | Subset | Number of Documents |
|------------|------------|----------------------|
| German | **de** | 18K |
| French | **fr** | 11K |
| Italian | **it** | 6K |
| Romansh    | **rm**     | 534                  |
| English | **en** | 207 |
## Dataset Structure
### Data Fields
Each entry in the dataset is a dictionary with the following keys:
- `canton`: the canton of origin of the legislation
- example: "ag"
- `language`: the language of the legislation
- example: "de"
- `uuid`: a unique identifier for the legislation
- example: "ec312f57-05fe-4552-ba50-8c9c269e0f3b"
- `title`: the title of the legislation
- example: "Gesetz über die Geoinformation im Kanton Aargau"
- `short`: a short description of the legislation
- example: "Kantonales Geoinformationsgesetz"
- `abbreviation`: an abbreviation for the legislation
- example: "KGeoIG"
- `sr_number`: a reference number for the legislation
- example: "740.100"
- `is_active`: whether the legislation is currently in force
- example: true
- `version_active_since`: the date since when the legislation's current version is active
- example: "2021-09-01"
- `family_active_since`: the date since when the legislation's current version's family is active
- example: "2011-05-24"
- `version_inactive_since`: the date since when the legislation's current version is inactive
- example: null
- `version_found_at`: the date the legislation's current version was found
- example: "2021-09-01"
- `pdf_url`: a link to the legislation's pdf
- example: "https://www.lexfind.ch/tol/1557/de"
- `html_url`: a link to the legislation's html
  - example: "https://gesetzessammlungen.ag.ch/app/de/texts_of_law/740.100"
- `pdf_content`: the legislation's pdf content
- example: "740.100 - Gesetz über..."
- `html_content`: the legislation's html content
- example: ""
- `changes`: a list of changes made to the legislation
- example: []
- `history`: a list of the legislation's history
- example: []
- `quotes`: a list of quotes from the legislation
- example: []
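As a sketch of how these fields might be used, the following filters records down to the laws currently in force in a given language. The records here are hypothetical stand-ins carrying only the relevant keys:

```python
def active_laws(records, language):
    """Return titles of legislation that is in force and written in `language`."""
    return [
        r["title"] for r in records
        if r["is_active"] and r["language"] == language
    ]

samples = [
    {"title": "Gesetz A", "language": "de", "is_active": True},
    {"title": "Loi B", "language": "fr", "is_active": True},
    {"title": "Gesetz C", "language": "de", "is_active": False},
]
print(active_laws(samples, "de"))  # ['Gesetz A']
```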
### Data Instances
[More Information Needed]
### Data Splits
1. 'ch': Switzerland (Federal) - 15840
2. 'fr': Fribourg - 1633
3. 'be': Bern - 1344
4. 'vs': Valais - 1328
5. 'gr': Graubünden - 1205
6. 'ne': Neuchâtel - 1115
7. 'zh': Zurich - 974
8. 'bs': Basel-Stadt - 899
9. 'bl': Basel-Landschaft - 863
10. 'vd': Vaud - 870
11. 'ge': Geneva - 837
12. 'sg': St. Gallen - 764
13. 'ju': Jura - 804
14. 'zg': Zug - 632
15. 'ti': Ticino - 627
16. 'lu': Lucerne - 584
17. 'so': Solothurn - 547
18. 'ow': Obwalden - 513
19. 'ik': Interkantonal - 510
20. 'sh': Schaffhausen - 469
21. 'gl': Glarus - 467
22. 'tg': Thurgau - 453
23. 'sz': Schwyz - 423
24. 'ai': Appenzell Innerrhoden - 416
25. 'ag': Aargau - 483
26. 'ar': Appenzell Ausserrhoden - 330
27. 'nw': Nidwalden - 401
28. 'ur': Uri - 367
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
carlosejimenez/wikipedia-20220301.en-0.005-validation | carlosejimenez | 2023-01-28T22:28:00Z | 18 | 0 | null | [
"region:us"
] | 2023-01-28T22:28:00Z | 2023-01-28T06:15:58.000Z | 2023-01-28T06:15:58 | Entry not found
rfernand/basic_sentence_transforms | rfernand | 2023-05-17T18:33:56Z | 18 | 0 | null | [
"task_categories:text2text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:other",
"s... | 2023-05-17T18:33:56Z | 2023-01-28T18:45:06.000Z | 2023-01-28T18:45:06 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- other
multilinguality:
- monolingual
pretty_name: Active/Passive/Logical Transforms
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
tags:
- struct2struct
- tree2tree
task_categories:
- text2text-generation
task_ids: []
---
# Dataset Card for Active/Passive/Logical Transforms
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Subsets (Tasks)](#data-tasks)
- [Dataset Splits](#data-splits)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Roland Fernandez](mailto:rfernand@microsoft.com)
### Dataset Summary
This dataset is a synthetic dataset containing structure-to-structure transformation tasks between
English sentences in 3 forms: active, passive, and logical. The dataset also includes several
tree-transformation diagnostic/warm-up tasks.
### Supported Tasks and Leaderboards
[TBD]
### Languages
All data is in English.
## Dataset Structure
The dataset consists of several subsets, or tasks. Each task contains a train split, a validation split, and a
test split, with most tasks also containing two out-of-distribution splits (one for new adjectives and one for longer adjective phrases).
Each sample in a split contains a source string, a target string, and 0-2 annotation strings.
### Dataset Subsets (Tasks)
The dataset consists of diagnostic/warm-up tasks and core tasks. The core tasks represent the translation of English sentences between the active, passive, and logical forms.
The 12 diagnostic/warm-up tasks are:
```
- car_cdr_cons (small phrase translation tasks that require only: CAR, CDR, or CAR+CDR+CONS operations)
- car_cdr_cons_tuc (same task as car_cdr_cons, but requires mapping lowercase fillers to their uppercase tokens)
- car_cdr_rcons (same task as car_cdr_cons, but the CONS samples have their left/right children swapped)
- car_cdr_rcons_tuc (same task as car_cdr_rcons, but requires mapping lowercase fillers to their uppercase tokens)
- car_cdr_seq (each sample requires 1-4 combinations of CAR and CDR, as identified by the root filler token)
- car_cdr_seq_40k (same task as car_cdr_seq, but train samples increased from 10K to 40K)
- car_cdr_seq_tuc (same task as car_cdr_seq, but requires mapping lowercase fillers to their uppercase tokens)
- car_cdr_seq_40k_tuc (same task as car_cdr_seq_tuc, but train samples increased from 10K to 40K)
- car_cdr_seq_path (similar to car_cdr_seq, but each needed operation is represented as a node in the left child of the root)
- car_cdr_seq_path_40k (same task as car_cdr_seq_path, but train samples increased from 10K to 40K)
- car_cdr_seq_path_40k_tuc (same task as car_cdr_seq_path_40k, but requires mapping lowercase fillers to their uppercase tokens)
- car_cdr_seq_path_tuc (same task as car_cdr_seq_path, but requires mapping lowercase fillers to their uppercase tokens)
```
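The CAR/CDR/CONS operations behind these warm-up tasks can be sketched on binary trees encoded as nested Python tuples (an assumed encoding for illustration; the dataset itself serializes trees as parenthesized strings):

```python
# Minimal sketch of the CAR/CDR/CONS operations named in the task list above,
# on binary trees encoded as nested Python tuples (an assumed encoding).

def car(tree):
    """Return the left child of a binary tree node."""
    return tree[0]

def cdr(tree):
    """Return the right child of a binary tree node."""
    return tree[1]

def cons(left, right):
    """Combine two subtrees into a new binary tree node."""
    return (left, right)

tree = (("a", "b"), "c")
assert car(tree) == ("a", "b")
assert cdr(tree) == "c"
assert cons(car(tree), cdr(tree)) == tree                # CAR+CDR+CONS round-trip
assert cons(cdr(tree), car(tree)) == ("c", ("a", "b"))   # left/right swapped, as in the rcons tasks
```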
The 22 core tasks are:
```
- active_active_stb (active sentence translation, from sentence to parenthesized tree form, both directions)
- active_active_stb_40k (same task as active_active_stb, but train samples increased from 10K to 40K)
- active_logical_ssb (active to logical sentence translation, in both directions)
- active_logical_ssb_40k (same task as active_logical_ssb, but train samples increased from 10K to 40K)
- active_logical_ttb (active to logical tree translation, in both directions)
- active_logical_ttb_40k (same task as active_logical_ttb, but train samples increased from 10K to 40K)
- active_passive_ssb (active to passive sentence translation, in both directions)
- active_passive_ssb_40k (same task as active_passive_ssb, but train samples increased from 10K to 40K)
- active_passive_ttb (active to passive tree translation, in both directions)
- active_passive_ttb_40k (same task as active_passive_ttb, but train samples increased from 10K to 40K)
- actpass_logical_ss (mixture of active to logical and passive to logical sentence translations, single direction)
- actpass_logical_ss_40k (same task as actpass_logical_ss, but train samples increased from 10K to 40K)
- actpass_logical_tt (mixture of active to logical and passive to logical tree translations, single direction)
- actpass_logical_tt_40k (same task as actpass_logical_tt, but train samples increased from 10K to 40K)
- logical_logical_stb (logical form sentence translation, from sentence to parenthesized tree form, both directions)
- logical_logical_stb_40k (same task as logical_logical_stb, but train samples increased from 10K to 40K)
- passive_logical_ssb (passive to logical sentence translation, in both directions)
- passive_logical_ssb_40k (same task as passive_logical_ssb, but train samples increased from 10K to 40K)
- passive_logical_ttb (passive to logical tree translation, in both directions)
- passive_logical_ttb_40k (same task as passive_logical_ttb, but train samples increased from 10K to 40K)
- passive_passive_stb (passive sentence translation, from sentence to parenthesized tree form, both directions)
- passive_passive_stb_40k (same task as passive_passive_stb, but train samples increased from 10K to 40K)
```
### Data Splits
Most tasks have the following splits:
- train
- validation
- test
- ood_new
- ood_long
- ood_all
Here is a table showing how the number of examples varies by split (for most tasks):
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| train | 10,000 |
| validation | 1,250 |
| test | 1,250 |
| ood_new | 1,250 |
| ood_long | 1,250 |
| ood_all | 1,250 |
### Data Instances
For each sample, there are a source and a target string. Both are either plain text or a parenthesized
version of a tree, depending on the task.
Here is an example from the *train* split of the *active_passive_ttb* task:
```
{
'source': '( S ( NP ( DET his ) ( AP ( N cat ) ) ) ( VP ( V discovered ) ( NP ( DET the ) ( AP ( ADJ blue ) ( AP ( N priest ) ) ) ) ) )',
'target': '( S ( NP ( DET the ) ( AP ( ADJ blue ) ( AP ( N priest ) ) ) ) ( VP ( AUXPS was ) ( VPPS ( V discovered ) ( PPPS ( PPS by ) ( NP ( DET his ) ( AP ( N cat ) ) ) ) ) ) )',
'direction': 'forward'
}
```
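The parenthesized tree strings above can be read into nested lists with a small s-expression parser; the sketch below is illustrative and not part of the dataset tooling:

```python
# Illustrative parser turning the parenthesized tree strings shown above
# into nested Python lists (an s-expression reader; not part of the dataset).

def parse_tree(s):
    # Tokenize by padding parentheses with spaces, then splitting on whitespace.
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def parse():
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1  # consume "("
            children = []
            while tokens[pos] != ")":
                children.append(parse())
            pos += 1  # consume ")"
            return children
        tok = tokens[pos]
        pos += 1
        return tok

    return parse()

tree = parse_tree("( S ( NP ( DET his ) ( AP ( N cat ) ) ) )")
assert tree == ["S", ["NP", ["DET", "his"], ["AP", ["N", "cat"]]]]
```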
### Data Fields
- `source`: the string denoting the sequence or tree structure to be translated
- `target`: the string denoting the gold (aka label) sequence or tree structure
Optional annotation fields (their presence varies by task):
- `direction`: describes the direction of the translation (forward, backward), relative to the task name
- `count` : a string denoting the count of symbolic operations needed (e.g., "s3") to translate the source to the target
- `class` : a string denoting the type of translation needed
## Dataset Creation
### Curation Rationale
We wanted a dataset comprised of relatively simple English active/passive/logical form translations, where we could focus
on two types of out of distribution generalization: longer source sequences and new adjectives.
### Source Data
[N/A]
#### Initial Data Collection and Normalization
[N/A]
#### Who are the source language producers?
The dataset was generated from templates designed by Paul Smolensky and Roland Fernandez.
### Annotations
Besides the source and target structured sequences, some of the subsets (tasks) contain 1-2 additional columns that
describe the category and tree depth of each sample.
#### Annotation process
The annotation columns were generated from each sample's template and source sequence.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No names or other sensitive information are included in the data.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can translate structured data from one form to another, in a
way that generalizes to out-of-distribution adjective values and lengths.
### Discussion of Biases
[TBD]
### Other Known Limitations
[TBD]
## Additional Information
The internal name of this dataset is nc_pat.
### Dataset Curators
The dataset was generated from templates designed by Paul Smolensky and Roland Fernandez.
### Licensing Information
This dataset is released under the [Permissive 2.0 license](https://cdla.dev/permissive-2-0/).
### Citation Information
[TBD]
### Contributions
Thanks to [The Neurocompositional AI group at Microsoft Research](https://www.microsoft.com/en-us/research/project/neurocompositional-ai/) for creating and adding this dataset.
LLukas22/cqadupstack | LLukas22 | 2023-04-30T19:24:35Z | 18 | 0 | null | [
"task_categories:sentence-similarity",
"task_categories:feature-extraction",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-04-30T19:24:35Z | 2023-01-31T14:18:36.000Z | 2023-01-31T14:18:36 | ---
license: apache-2.0
task_categories:
- sentence-similarity
- feature-extraction
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "cqadupstack"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [http://nlp.cis.unimelb.edu.au/resources/cqadupstack/](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
### Dataset Summary
This is a preprocessed version of cqadupstack, to make it easily consumable via huggingface. The original dataset can be found [here](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/).
CQADupStack is a benchmark dataset for community question-answering (cQA) research. It contains threads from twelve StackExchange subforums, annotated with duplicate question information, and comes with pre-defined training, development, and test splits, both for retrieval and classification experiments.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"question": "Very often, when some unknown company is calling me, in couple of seconds I see its name and logo on standard ...",
"answer": "You didn't explicitely mention it, but from the context I assume you're using a device with Android 4.4 (Kitkat). With that ...",
"title": "Why Dialer shows contact name and image, when contact is not in my address book?",
"forum_tag": "android"
}
```
### Data Fields
The data fields are the same among all splits.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `title`: a `string` feature.
- `forum_tag`: a categorical `string` feature.
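Instances like the one above can be turned into text pairs for sentence-similarity training; the pairing scheme below is one assumed choice for illustration, while the field names match this card:

```python
# Illustrative sketch: build an (anchor, positive) text pair from one instance.
# Field names follow the card; the title+question pairing scheme is an assumption.

def to_pair(example):
    anchor = f"{example['title']} {example['question']}".strip()
    return anchor, example["answer"]

example = {
    "question": "Very often, when some unknown company is calling me...",
    "answer": "You didn't explicitly mention it...",
    "title": "Why Dialer shows contact name and image?",
    "forum_tag": "android",
}
anchor, positive = to_pair(example)
assert anchor.startswith("Why Dialer")
assert positive.startswith("You didn't")
```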
## Additional Information
### Licensing Information
This dataset is distributed under the Apache 2.0 licence.
Kamtera/Persian-conversational-dataset | Kamtera | 2023-04-04T08:19:27Z | 18 | 0 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"language:fa",
"license:apache-2.0",
"region:us"
] | 2023-04-04T08:19:27Z | 2023-02-05T10:12:23.000Z | 2023-02-05T10:12:23 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- fa
pretty_name: persianConversation
---
persianConversation
jonathan-roberts1/Airbus-Wind-Turbines-Patches | jonathan-roberts1 | 2023-03-31T15:23:50Z | 18 | 1 | null | [
"license:other",
"region:us"
] | 2023-03-31T15:23:50Z | 2023-02-17T15:56:30.000Z | 2023-02-17T15:56:30 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': no wind turbine
'1': wind turbine
splits:
- name: train
num_bytes: 169946184.648
num_examples: 71504
download_size: 147716132
dataset_size: 169946184.648
license: other
---
# Dataset Card for "Airbus-Wind-Turbines-Patches"
## Dataset Description
- **Paper** [Airbus Wind Turbine Patches](https://www.kaggle.com/datasets/airbusgeo/airbus-wind-turbines-patches)
- **Split** Validation
## Split Information
This HuggingFace dataset repository contains just the Validation split.
### Licensing Information
[CC BY-NC-SA 4.0](https://www.kaggle.com/datasets/airbusgeo/airbus-wind-turbines-patches)
## Citation Information
[Airbus Wind Turbine Patches](https://www.kaggle.com/datasets/airbusgeo/airbus-wind-turbines-patches)
```
@misc{kaggle_awtp,
author = {Airbus DS GEO S.A.},
title = {Airbus Wind Turbine Patches},
howpublished = {\url{https://www.kaggle.com/datasets/airbusgeo/airbus-wind-turbines-patches}},
year = {2021},
version = {1.0}
}
```
nanaaaa/emotion_chinese_english | nanaaaa | 2023-03-05T10:36:14Z | 18 | 6 | null | [
"task_categories:text-classification",
"language:zh",
"language:en",
"doi:10.57967/hf/1019",
"region:us"
] | 2023-03-05T10:36:14Z | 2023-02-20T13:24:36.000Z | 2023-02-20T13:24:36 | ---
task_categories:
- text-classification
language:
- zh
- en
---
suolyer/ocnli | suolyer | 2023-02-22T11:10:11Z | 18 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-02-22T11:10:11Z | 2023-02-22T08:54:19.000Z | 2023-02-22T08:54:19 | ---
license: apache-2.0
---
OpenBioML/chebi_20 | OpenBioML | 2023-03-03T22:27:47Z | 18 | 0 | null | [
"region:us"
] | 2023-03-03T22:27:47Z | 2023-03-03T22:18:18.000Z | 2023-03-03T22:18:18 | Entry not found
Hundred9/Duaaii_6 | Hundred9 | 2023-03-07T16:00:41Z | 18 | 0 | null | [
"region:us"
] | 2023-03-07T16:00:41Z | 2023-03-07T16:00:38.000Z | 2023-03-07T16:00:38 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
splits:
- name: train
num_bytes: 4878651.0
num_examples: 647
download_size: 4842183
dataset_size: 4878651.0
---
# Dataset Card for "Duaaii_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MarkJeong/aihub_food | MarkJeong | 2023-03-09T17:13:22Z | 18 | 0 | null | [
"region:us"
] | 2023-03-09T17:13:22Z | 2023-03-09T02:39:58.000Z | 2023-03-09T02:39:58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '01011001'
'1': '01012001'
'2': '01012002'
'3': '01012003'
'4': '01012004'
'5': '01012005'
'6': '01012006'
'7': '01013001'
'8': 01014008
'9': 01014009
'10': '01014010'
'11': '01014011'
'12': '01014012'
'13': '01014013'
'14': '01015002'
'15': '01015003'
'16': '01015012'
'17': '01015013'
'18': '01015014'
'19': '01015015'
'20': '01015016'
'21': '01015017'
'22': 01015018
'23': 01015019
'24': '01016001'
'25': '01016002'
'26': '01016003'
'27': '01016004'
'28': '01016005'
'29': '01016006'
'30': '01016007'
'31': 01016008
'32': '02011006'
'33': '02011007'
'34': 02011008
'35': 02011009
'36': '02011010'
'37': '02011011'
'38': '02011012'
'39': '02011013'
'40': '02011014'
'41': '02011015'
'42': '02011016'
'43': '02011017'
'44': 02011018
'45': 02011019
'46': '02011020'
'47': '02011021'
'48': '02011023'
'49': '02011024'
'50': '02011025'
'51': '02011027'
'52': 02011028
'53': 02011029
'54': '02011030'
'55': '02011031'
'56': '02011032'
'57': '02011033'
'58': '02011034'
'59': '02011035'
'60': '02011036'
'61': '02011037'
'62': 02011038
'63': 02011039
'64': '02011040'
'65': '02012001'
'66': '02012002'
'67': '02012003'
'68': '02012004'
'69': '02012005'
'70': '03011001'
'71': '03011002'
'72': '03011003'
'73': '03011004'
'74': '03011005'
'75': '03011006'
'76': '03011007'
'77': 03011008
'78': 03011009
'79': '03011010'
'80': '03011011'
'81': '03012001'
'82': '03012002'
'83': '04011001'
'84': '04011002'
'85': '04011003'
'86': '04011004'
'87': '04011005'
'88': '04011006'
'89': '04011007'
'90': 04011008
'91': '04011010'
'92': '04011011'
'93': '04011012'
'94': '04011013'
'95': '04011014'
'96': '04011015'
'97': '04011016'
'98': '04012001'
'99': '04012002'
'100': '04012003'
'101': '04012004'
'102': '04012005'
'103': '04012006'
'104': '04012007'
'105': 04012008
'106': 04012009
'107': '04012010'
'108': '04012011'
'109': '04012012'
'110': '04012013'
'111': '04013002'
'112': '04013003'
'113': '04013004'
'114': '04013005'
'115': '04013006'
'116': '04013007'
'117': 04013008
'118': 04013009
'119': '04013010'
'120': '04013011'
'121': '04013012'
'122': '04013013'
'123': '04013014'
'124': '04013015'
'125': '04013017'
'126': 04013018
'127': 04013019
'128': '04015003'
'129': '04016001'
'130': '04017001'
'131': '04017002'
'132': 04018001
'133': 04018002
'134': 04018003
'135': 04018004
'136': 04019001
'137': 04019002
'138': 04019003
'139': 04019004
'140': 04019005
'141': 04019006
'142': 04019007
'143': 04019008
'144': '05011001'
'145': '05011002'
'146': '05011004'
'147': 05011008
'148': '05011010'
'149': '05011011'
'150': '05011012'
'151': '05012001'
'152': '05012002'
'153': '05012003'
'154': '05012004'
'155': '05012005'
'156': '05013001'
'157': '06012001'
'158': '06012002'
'159': '06012003'
'160': '06012011'
'161': '07011003'
'162': '07011004'
'163': '07012001'
'164': '07012002'
'165': '07012003'
'166': '07013001'
'167': '07013002'
'168': '07013003'
'169': '07013004'
'170': '07013005'
'171': '07013006'
'172': '07013007'
'173': 07013008
'174': 07013009
'175': '07013010'
'176': '07013011'
'177': 08011004
'178': 08011005
'179': 08011006
'180': 08011007
'181': 08011008
'182': 08012001
'183': 08012002
'184': 08012003
'185': 08012004
'186': 08012005
'187': 08012006
'188': 08012007
'189': 08012008
'190': 08012009
'191': 08012010
'192': 08013001
'193': 08013002
'194': 08013003
'195': 08013004
'196': 08013005
'197': 08013006
'198': 08014001
'199': 08014002
'200': 08014003
'201': 09012001
'202': 09012002
'203': 09013001
'204': 09013002
'205': 09014001
'206': 09014002
'207': 09014003
'208': 09014004
'209': 09015001
'210': 09015002
'211': 09015003
'212': 09016001
'213': '10011001'
'214': '10011002'
'215': '10011003'
'216': '10011004'
'217': '11011001'
'218': '11011002'
'219': '11011003'
'220': '11011004'
'221': '11011005'
'222': '11011006'
'223': '11011007'
'224': '11011008'
'225': '11011009'
'226': '11011010'
'227': '11011011'
'228': '11012001'
'229': '11012002'
'230': '11012003'
'231': '11012004'
'232': '11013001'
'233': '11013002'
'234': '11013003'
'235': '11013004'
'236': '11013005'
'237': '11013006'
'238': '11013007'
'239': '11013009'
'240': '11013010'
'241': '11013011'
'242': '11013012'
'243': '11014001'
'244': '11014002'
'245': '11014003'
'246': '11014004'
'247': '11014005'
'248': '11014006'
'249': '11014007'
'250': '11014008'
'251': '11014009'
'252': '11014010'
'253': '11015001'
'254': '11015002'
'255': '12011001'
'256': '12011002'
'257': '12011003'
'258': '12011004'
'259': '12011005'
'260': '12011006'
'261': '12011007'
'262': '12011008'
'263': '12011009'
'264': '12011010'
'265': '12011011'
'266': '12011012'
'267': '12011013'
'268': '12011014'
'269': '12011015'
'270': '13011001'
'271': '13011002'
'272': '13011003'
'273': '13011011'
'274': '13011012'
'275': '13012001'
'276': '13012002'
'277': '14011001'
'278': '14011002'
'279': '14011004'
'280': '14011005'
'281': '14012001'
'282': '14012002'
'283': '15011001'
'284': '15011002'
'285': '15011003'
'286': '15011004'
'287': '15011005'
'288': '15011006'
'289': '15011007'
'290': '15011008'
'291': '15011009'
'292': '15011010'
'293': '15011011'
'294': '15011012'
'295': '15011013'
'296': '15011014'
'297': '15011015'
'298': '15011016'
'299': '15011017'
'300': '16011001'
'301': '16011002'
'302': '16011003'
'303': '16011004'
'304': '16011005'
'305': '16011006'
splits:
- name: train
num_bytes: 14812723538.728
num_examples: 486839
- name: test
num_bytes: 33069619665.134
num_examples: 21178
- name: validation
num_bytes: 33770989851.48
num_examples: 21180
download_size: 82692432131
dataset_size: 81653333055.342
---
# Dataset Card for "aihub_food"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
IlyaGusev/librusec | IlyaGusev | 2023-03-20T16:03:43Z | 18 | 4 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"region:us"
] | 2023-03-20T16:03:43Z | 2023-03-12T12:57:59.000Z | 2023-03-12T12:57:59 | ---
dataset_info:
features:
- name: id
dtype: uint64
- name: text
dtype: string
splits:
- name: train
num_bytes: 125126513109
num_examples: 223256
download_size: 34905399148
dataset_size: 125126513109
task_categories:
- text-generation
language:
- ru
size_categories:
- 100K<n<1M
---
# Librusec dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
## Description
**Summary:** Based on http://panchenko.me/data/russe/librusec_fb2.plain.gz. Uploaded here for convenience. Additional cleaning was performed.
**Script:** [create_librusec.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_librusec.py)
**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)
**Languages:** Russian.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/librusec', split="train", streaming=True)
for example in dataset:
print(example["text"])
``` | [
-0.3557703495025635,
-0.3237767517566681,
0.12711502611637115,
0.2186562418937683,
-0.3848736882209778,
-0.1312035322189331,
-0.07975085079669952,
0.062396761029958725,
0.1466267704963684,
0.3756638467311859,
-0.562752366065979,
-0.607941210269928,
-0.27806052565574646,
-0.0192695334553718... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Circularmachines/batch_indexed_parts | Circularmachines | 2023-03-16T12:38:21Z | 18 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2023-03-16T12:38:21Z | 2023-03-16T07:45:04.000Z | 2023-03-16T07:45:04 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0000'
'1': '0001'
'2': '0002'
'3': '0003'
'4': '0004'
'5': '0005'
'6': '0006'
'7': '0007'
'8': 0008
'9': 0009
'10': '0010'
'11': '0011'
'12': '0012'
'13': '0013'
'14': '0014'
'15': '0015'
'16': '0016'
'17': '0017'
'18': 0018
'19': 0019
'20': '0020'
'21': '0021'
'22': '0022'
'23': '0023'
'24': '0024'
'25': '0025'
'26': '0026'
'27': '0027'
'28': 0028
'29': 0029
'30': '0030'
'31': '0031'
'32': '0032'
'33': '0033'
'34': '0034'
'35': '0035'
'36': '0036'
'37': '0037'
'38': 0038
'39': 0039
'40': '0040'
'41': '0041'
'42': '0042'
'43': '0043'
'44': '0044'
'45': '0045'
'46': '0046'
'47': '0047'
'48': 0048
'49': 0049
'50': '0050'
'51': '0051'
'52': '0052'
'53': '0053'
splits:
- name: train
num_bytes: 482130063.0
num_examples: 27000
- name: test
num_bytes: 121531.0
num_examples: 5
download_size: 477594580
dataset_size: 482251594.0
---
Images automatically labelled by the Batch Indexing Machine, under development
AyoubChLin/CNN_News_Articles_2011-2022 | AyoubChLin | 2023-04-10T15:29:24Z | 18 | 2 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-04-10T15:29:24Z | 2023-03-19T11:01:10.000Z | 2023-03-19T11:01:10 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: CNN News Articles from 2011 to 2022
size_categories:
- n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': business
'1': entertainment
'2': health
'3': news
'4': politics
'5': sport
splits:
- name: train
num_examples: 32218
- name: test
num_examples: 5686
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
---
# CNN News Articles 2011-2022 Dataset
## Introduction
This dataset contains CNN News Articles from 2011 to 2022 after basic cleaning. The dataset includes the following information:
- Category
- Full text

The data was downloaded from Kaggle at this URL: https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning. The dataset was split into two sets:
- Train set with 32,218 examples
- Test set with 5,686 examples
## Usage
This dataset can be used for different natural language processing tasks such as text classification, text summarization, named entity recognition, and more. The dataset is available in Hugging Face Datasets with the ID AyoubChLin/CNN_News_Articles_2011-2022.
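Following the class labels declared in the YAML header above, the mapping between integer labels and category names can be sketched as follows (the helper functions are illustrative; the id-to-name table is copied from the header):

```python
# Illustrative mapping between integer labels and category names,
# copied from the class_label names in this card's YAML header.

LABEL_NAMES = ["business", "entertainment", "health", "news", "politics", "sport"]

def id2label(label_id):
    """Map an integer label (as stored in the dataset) to its category name."""
    return LABEL_NAMES[label_id]

def label2id(name):
    """Map a category name back to its integer label."""
    return LABEL_NAMES.index(name)

assert id2label(4) == "politics"
assert label2id("sport") == 5
```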
## Acknowledgements
The data was collected by the Kaggle user [hadasu92](https://github.com/hadasu). The splitting of the dataset into train and test sets was performed by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/).
yasminesarraj/texts_summary | yasminesarraj | 2023-03-21T14:46:12Z | 18 | 1 | null | [
"license:openrail",
"region:us"
] | 2023-03-21T14:46:12Z | 2023-03-21T14:45:49.000Z | 2023-03-21T14:45:49 | ---
license: openrail
---
lansinuote/gen.1.celeba | lansinuote | 2023-03-24T03:46:24Z | 18 | 0 | null | [
"region:us"
] | 2023-03-24T03:46:24Z | 2023-03-24T03:36:48.000Z | 2023-03-24T03:36:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: 5_o_Clock_Shadow
dtype: int64
- name: Arched_Eyebrows
dtype: int64
- name: Attractive
dtype: int64
- name: Bags_Under_Eyes
dtype: int64
- name: Bald
dtype: int64
- name: Bangs
dtype: int64
- name: Big_Lips
dtype: int64
- name: Big_Nose
dtype: int64
- name: Black_Hair
dtype: int64
- name: Blond_Hair
dtype: int64
- name: Blurry
dtype: int64
- name: Brown_Hair
dtype: int64
- name: Bushy_Eyebrows
dtype: int64
- name: Chubby
dtype: int64
- name: Double_Chin
dtype: int64
- name: Eyeglasses
dtype: int64
- name: Goatee
dtype: int64
- name: Gray_Hair
dtype: int64
- name: Heavy_Makeup
dtype: int64
- name: High_Cheekbones
dtype: int64
- name: Male
dtype: int64
- name: Mouth_Slightly_Open
dtype: int64
- name: Mustache
dtype: int64
- name: Narrow_Eyes
dtype: int64
- name: No_Beard
dtype: int64
- name: Oval_Face
dtype: int64
- name: Pale_Skin
dtype: int64
- name: Pointy_Nose
dtype: int64
- name: Receding_Hairline
dtype: int64
- name: Rosy_Cheeks
dtype: int64
- name: Sideburns
dtype: int64
- name: Smiling
dtype: int64
- name: Straight_Hair
dtype: int64
- name: Wavy_Hair
dtype: int64
- name: Wearing_Earrings
dtype: int64
- name: Wearing_Hat
dtype: int64
- name: Wearing_Lipstick
dtype: int64
- name: Wearing_Necklace
dtype: int64
- name: Wearing_Necktie
dtype: int64
- name: Young
dtype: int64
splits:
- name: train
num_bytes: 1474211218.427
num_examples: 202599
download_size: 1396302346
dataset_size: 1474211218.427
---
# Dataset Card for "gen.1.celeba"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bharat-raghunathan/indian-foods-dataset | bharat-raghunathan | 2023-03-26T08:58:10Z | 18 | 1 | null | [
"task_categories:image-classification",
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-03-26T08:58:10Z | 2023-03-26T06:26:43.000Z | 2023-03-26T06:26:43 | ---
license: cc0-1.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': biryani
'1': cholebhature
'2': dabeli
'3': dal
'4': dhokla
'5': dosa
'6': jalebi
'7': kathiroll
'8': kofta
'9': naan
'10': pakora
'11': paneer
'12': panipuri
'13': pavbhaji
'14': vadapav
splits:
- name: train
num_bytes: 611741947.222
num_examples: 3809
- name: test
num_bytes: 153961285
num_examples: 961
download_size: 688922167
dataset_size: 765703232.222
task_categories:
- image-classification
- text-to-image
language:
- en
pretty_name: indian-foods
size_categories:
- 1K<n<10K
---
# Dataset Card for Indian Foods Dataset
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset
- **Repository:** https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://www.kaggle.com/anshulmehtakaggl
### Dataset Summary
This is a multi-class (multi-category) image classification dataset of Indian foods, showcasing [The-massive-Indian-Food-Dataset](https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset).
This card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
```json
{
"image": "Image(decode=True, id=None)",
  "label": "ClassLabel(names=['biryani', 'cholebhature', 'dabeli', 'dal', 'dhokla', 'dosa', 'jalebi', 'kathiroll', 'kofta', 'naan', 'pakora', 'paneer', 'panipuri', 'pavbhaji', 'vadapav'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and test split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3809 |
| test | 961 |
### Data Instances
Each instance is a picture of an Indian food item, along with the category it belongs to.
#### Initial Data Collection and Normalization
Collected by scraping data from Google Images, leveraging some JS functions.
All images are resized to (300, 300) to maintain size uniformity.
### Dataset Curators
[Anshul Mehta](https://www.kaggle.com/anshulmehtakaggl)
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
[The Massive Indian Foods Dataset](https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset)
mohammadjavadpirhadi/fake-news-detection-dataset-english | mohammadjavadpirhadi | 2023-03-26T16:10:25Z | 18 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | 2023-03-26T16:10:25Z | 2023-03-26T14:19:58.000Z | 2023-03-26T14:19:58 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: subject
dtype: string
- name: date
dtype: string
- name: label
dtype:
class_label:
names:
'0': real
'1': fake
splits:
- name: train
num_bytes: 93521249
num_examples: 35918
- name: test
num_bytes: 23506751
num_examples: 8980
download_size: 71290190
dataset_size: 117028000
license: mit
task_categories:
- text-classification
language:
- en
pretty_name: Fake News Detection English
size_categories:
- 10K<n<100K
---
# Dataset Card for "fake-news-detection-dataset-english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
suolyer/pile_pubmed-central | suolyer | 2023-03-27T03:06:17Z | 18 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-03-27T03:06:17Z | 2023-03-26T16:39:32.000Z | 2023-03-26T16:39:32 | ---
license: apache-2.0
---
GarayMC/cracks | GarayMC | 2023-03-30T23:05:36Z | 18 | 0 | null | [
"region:us"
] | 2023-03-30T23:05:36Z | 2023-03-30T23:04:34.000Z | 2023-03-30T23:04:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': images
'1': label
'2': test
'3': train
'4': valid
splits:
- name: train
num_bytes: 7553040.0
num_examples: 214
download_size: 7153050
dataset_size: 7553040.0
---
# Dataset Card for "cracks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
artemkramov/coreference-dataset-ua | artemkramov | 2023-04-02T11:54:35Z | 18 | 4 | null | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:uk",
"coreference-resolution",
"coreference",
"anaphora",
"region:us"
] | 2023-04-02T11:54:35Z | 2023-04-01T13:07:36.000Z | 2023-04-01T13:07:36 | ---
task_categories:
- token-classification
language:
- uk
pretty_name: Silver Ukrainian Coreference Dataset
tags:
- coreference-resolution
- coreference
- anaphora
size_categories:
- 10K<n<100K
---
# Silver Ukrainian Coreference Dataset
## Dataset Description
### Dataset Summary
A silver coreference resolution dataset for the Ukrainian language. The dataset was generated automatically with the usage of the word alignment method from the following English dataset: https://github.com/d5555/Coreference-dataset.
The word alignment method was implemented by Andrii Kursin (aqrsn@ukr.net).
### Languages
- Ukrainian
## Dataset Structure
### Data Fields
Each sample of the dataset consists of the following fields:
- **doc_key** - document identifier.
- **clusters** - list of clusters, where each cluster consists of the list of mentions. Each mention is represented as a list of two indices: the first index denotes the first word of the mention, the second index denotes the last word of the mention.
- **sentences** - list of sentences where each sentence is represented as a list of words.
- **tokens** - list of words.
- **speakers** - list of speakers which is currently filled with dummy input.
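As an illustration of how these fields fit together, here is a minimal sketch (with invented example values, not real dataset rows) that maps each cluster's `[first_word, last_word]` index pairs back to their surface text via the `tokens` list:

```python
# Hypothetical sample following the field layout described above.
sample = {
    "doc_key": "doc_0",
    "tokens": ["Марія", "сказала", ",", "що", "вона", "прийде"],
    "clusters": [[[0, 0], [4, 4]]],  # "Марія" and "вона" corefer
}

def mention_texts(sample):
    # Indices are inclusive: a mention [s, e] spans tokens[s] .. tokens[e].
    return [[" ".join(sample["tokens"][s:e + 1]) for s, e in cluster]
            for cluster in sample["clusters"]]

assert mention_texts(sample) == [["Марія", "вона"]]
```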
### Data Splits
The dataset is divided into two parts:
- training set;
- validation set.
A test set is absent because the dataset was generated automatically.
## Dataset Creation
### Source Data
The dataset was created from the following dataset: https://github.com/d5555/Coreference-dataset.
### Contributions
The code for the translation of samples with further alignment was created by Andrii Kursin (aqrsn@ukr.net). The dataset was generated by Artem Kramov (https://www.linkedin.com/in/artem-kramov-0b3731100/).
Devika03/Research_Paper_Summarization_Dataset | Devika03 | 2023-04-05T05:59:26Z | 18 | 2 | null | [
"region:us"
] | 2023-04-05T05:59:26Z | 2023-04-05T05:57:43.000Z | 2023-04-05T05:57:43 | Entry not found
Kevin-M-Smith/flint_images_300_300 | Kevin-M-Smith | 2023-04-08T14:08:46Z | 18 | 0 | null | [
"region:us"
] | 2023-04-08T14:08:46Z | 2023-04-08T14:07:19.000Z | 2023-04-08T14:07:19 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': clutter
'1': email
'2': email-squished
'3': handwritten-document
'4': spreadsheet
'5': typeset-document
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 176737820.0
num_examples: 4965
- name: test
num_bytes: 44473375.0
num_examples: 1242
download_size: 221048030
dataset_size: 221211195.0
---
# Dataset Card for "flint_images_300_300"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mattymchen/mr | mattymchen | 2023-04-19T15:20:03Z | 18 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language:en",
"region:us"
] | 2023-04-19T15:20:03Z | 2023-04-19T14:44:35.000Z | 2023-04-19T14:44:35 | ---
language:
- en
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 1352524
num_examples: 10662
download_size: 883903
dataset_size: 1352524
---
# Dataset Card for "mr"
## Dataset Description
Movie review dataset from SentEval.
## Data Fields
- `text`: Complete sentence expressing an opinion about a film.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Celestinian/minimal-wikipedia-corpus-raw | Celestinian | 2023-04-29T17:05:03Z | 18 | 1 | null | [
"license:mit",
"region:us"
] | 2023-04-29T17:05:03Z | 2023-04-27T21:22:23.000Z | 2023-04-27T21:22:23 | ---
license: mit
datasetsviewer:
not_supported: true
---
A dataset of Wikipedia's most popular articles: an extensive collection of unprocessed text data covering a diverse range of topics, including history, science, critical thinking, mathematics, and more.
This dataset aims to facilitate the pretraining of large language models by providing a vast corpus of informative content, making it an excellent resource for researchers and developers.
Its unprocessed format and diverse range of topics make it ideal for pretraining custom models that can understand and generate natural language text.
sharad/chatgpt-paraphrases-simple | sharad | 2023-05-08T09:09:04Z | 18 | 3 | null | [
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"paraphrase",
"region:us"
] | 2023-05-08T09:09:04Z | 2023-05-07T08:09:03.000Z | 2023-05-07T08:09:03 | ---
license: apache-2.0
language:
- en
tags:
- paraphrase
size_categories:
- 1M<n<10M
dataset_info:
features:
- name: s1
dtype: string
- name: s2
dtype: string
splits:
- name: train
num_bytes: 1283650386
num_examples: 6286314
download_size: 211207464
dataset_size: 1283650386
pretty_name: ChatGPT Paraphrase
---
This dataset is a simplified version of [ChatGPT Paraphrases](https://huggingface.co/datasets/humarin/chatgpt-paraphrases). It aims to take away the pain of expanding the original dataset into unique paraphrase pairs.
# Structure:
The dataset is not divided into a train/test split, and contains 6.3 million unique paraphrase pairs (6 × 5 × 420,000 / 2 = 6.3 million). The dataset contains the following 2 columns:
1. s1 - Sentence
2. s2 - Paraphrase
**Original Dataset Structure:**
The original dataset has following 4 columns-
1. text - 420k Unique sentence
2. paraphrases - List of 5 unique paraphrases generated by ChatGPT
3. category - Questions / Sentence
4. source - Quora/CNN/Others
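The expansion from the original 4-column rows into unique (s1, s2) pairs can be sketched as follows; `expand_to_pairs` and the example sentences are illustrative, not part of either dataset:

```python
from itertools import combinations

def expand_to_pairs(text, paraphrases):
    """Turn one source sentence plus its paraphrases into unique sentence pairs."""
    group = [text] + paraphrases            # 1 sentence + 5 paraphrases = 6 texts
    return list(combinations(group, 2))     # C(6, 2) = 6*5/2 = 15 unique pairs

pairs = expand_to_pairs(
    "How are you?",
    ["How do you do?", "How is it going?", "How are things?",
     "Are you doing OK?", "How have you been?"],
)
assert len(pairs) == 15  # 15 pairs x 420,000 rows = 6.3 million pairs
```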
For more information, usage rights, and legal disclaimer, check out [original dataset](https://huggingface.co/datasets/humarin/chatgpt-paraphrases).
techiaith/banc-trawsgrifiadau-bangor | techiaith | 2023-10-26T09:42:39Z | 18 | 1 | null | [
"size_categories:10K<n<100K",
"language:cy",
"license:cc0-1.0",
"verbatim transcriptions",
"speech recognition",
"region:us"
] | 2023-10-26T09:42:39Z | 2023-05-11T13:08:07.000Z | 2023-05-11T13:08:07 | ---
license: cc0-1.0
language:
- cy
tags:
- verbatim transcriptions
- speech recognition
pretty_name: 'Banc Trawsgrifiadau Bangor'
size_categories:
- 10K<n<100K
---
[See below for English](#bangor-transcription-bank)
# Banc Trawsgrifiadau Bangor
Dyma fanc o 30 awr 20 munud a 41 eiliad o segmentau o leferydd naturiol dros hanner cant o gyfranwyr ar ffurf ffeiliau mp3, ynghyd â thrawsgrifiadau 'verbatim' cyfatebol o’r lleferydd ar ffurf ffeil .tsv. Mae'r mwyafrif o'r lleferydd yn leferydd digymell, naturiol. Dosbarthwn y deunydd hwn o dan drwydded agored CC0.
## Pwrpas
Pwrpas y trawsgrifiadau hyn yw gweithredu fel data hyfforddi ar gyfer modelau adnabod lleferydd, gan gynnwys [ein modelau wav2vec](https://github.com/techiaith/docker-wav2vec2-cy). Ar gyfer y diben hwnnw, mae gofyn am drawsgrifiadau mwy verbatim o'r hyn a ddywedwyd na'r hyn a welir mewn trawsgrifiadau traddodiadol ac mewn isdeitlau, felly datblygwyd confensiwn arbennig ar gyfer y gwaith trawsgrifio ([gweler isod](#confensiynau_trawsgrifio)). Gydag ein modelau wav2vec, caiff cydran ychwanegol, sef 'model iaith', ei defnyddio ar ôl y model adnabod lleferydd i safoni mwy ar allbwn y model adnabod lleferydd i fod yn debycach i drawsgrifiadau traddodiadol ac isdeitlau.
Rydyn ni wedi darparu 3 ffeil .tsv, sef clips.tsv, train.tsv a test.tsv. Mae clips.tsv yn cynnwys ein trawsgrifiadau i gyd. Crëwyd train.tsv a test.tsv er mwyn darparu setiau 'safonol' sy'n caniatáu i ddefnyddwyr allu gymharu modelau gan wahanol hyfforddwyr yn deg, hynny yw fe'u crëwyd at bwrpas meincnodi. Mae train.tsv yn cynnwys 80% o'n trawsgrifiadau, a test.tsv yn cynnwys y 20% sy'n weddill.
Dyma enghraifft o gynnwys y data:
```
audio_filename audio_filesize transcript duration
f86a046fd0964e0386d8c1363907183d.mp3 898272 *post industrial* yym a gyda yy dwi'n ca'l deud 5092
f0c2310fdca34faaa83beca5fa7ed212.mp3 809720 sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra 4590
3eec3feefe254c9790739c22dd63c089.mp3 1335392 Felly ma' hon hefyd yn ddogfen fydd yn trosglwyddo gyda'r plant bobol ifanc o un cam i'r llall ac hefyd erbyn hyn i'r coleg 'lly. 7570
```
Ceir pedair colofn yn y ffeiliau .tsv. Y cyntaf yw enw’r ffeil sain. Maint y ffeil sain yw’r ail. Y trawsgrifiad ei hun sydd yn y drydedd golofn. Hyd y clip sain sydd yn yr olaf.
Dyma'r wybodaeth am y colofnau.
| Maes| Esboniad |
| ------ | ------ |
| `audio_filename`| Enw'r ffeil sain o fewn y ffolder 'clips'|
| `audio_filesize` | Maint y ffeil|
| `transcript` | Trawsgrifiad |
| `duration` | Hyd amser y clip mewn milliseconds. |
## Y Broses o Greu’r Adnodd
Casglwyd y ffeiliau sain yn bennaf o bodlediadau Cymraeg gyda chaniatâd eu perchnogion yn ogystal â'r cyfranwyr unigol. Rydym yn ddiolchgar tu hwnt i’r bobl yna. Yn ogystal, crewyd rhywfaint o sgriptiau ar batrwm eitemau newyddion ac erthyglau a'u darllen gan ymchwilwyr yr Uned Technolegau Iaith er mwyn sicrhau bod cynnwys o'r math hwnnw yn y banc.
Gyrrwyd y ffeiliau sain trwy ein trawsgrifiwr awtomataidd mewnol i segmentu’r sain a chreu trawsgrifiadau amrwd. Defnyddiwyd pecyn trawsgrifio Elan 6.4 (ar gael o https://archive.mpi.nl/tla/elan) gan drawsgrifwyr profiadol i wrando ar a chywiro’r trawsgrifiad amrwd.
## Nodyn Ynghylch Anonymeiddio’r Cynnwys
Er tegwch i’r cyfranwyr, rydyn ni wedi anonymeiddio’r trawsgrifiadau. Penderfynwyd anonymeiddio nid yn unig enwau pobl unigol, ond hefyd unrhyw Wybodaeth Bersonol Adnabyddadwy (PII) gan gynnwys, ond nid yn gyfyngedig i:
* Rhif ffôn
* Teitlau swyddi/galwedigaethau
* Gweithleoedd
* Enwau mannau cyhoeddus
* Lleoliad daearyddol
* Dyddiadau/amseroedd
Wrth drawsgrifio marciwyd pob segment oedd yn cynnwys PII gyda’r tag \<PII>, yna wnaethom hidlo allan pob segment oedd yn cynnwys tag \<PII> er mwyn sicrhau nad oedd unrhyw wybodaeth bersonol yn cael eu cyhoeddi fel rhan o’r adnodd hwn.
Rydym hefyd wedi newid trefn trawsgrifiadau i fod ar hap, felly nid ydynt wedi'u cyhoeddi yn y drefn y maent yn eu ymddangos yn y ffeiliau sain gwreiddiol.
<a name="confensiynau_trawsgrifio"></a>
## Confensiynau Trawsgrifio
Datblygwyd y confensiynau trawsgrifio hyn er mwyn sicrhau fod y trawsgrifiadau nid yn unig yn verbatim ond hefyd yn gyson. Fe’u datblygwyd trwy gyfeirio at gonfensiynau a ddefnyddir gan yr Uned yn y gorffennol, confensiynau eraill megis y rhai a defnyddiwyd yng nghorpora CorCenCC, Siarad, CIG1 a CIG2, a hefyd trwy broses o ddatblygu parhaol wrth i’r tîm ymgymryd â’r dasg o drawsgrifio.
**NODWCH** - gan ein bod wedi datblygu’r egwyddorion trawsgrifio yn rhannol wrth ymgymryd â’r dasg o drawsgrifio nid yw’r trawsgrifiadau cynnar o reidrwydd yn dilyn yr egwyddorion cant y cant. Bwriadwn wirio’r trawsgrifiadau wedi i ni fireinio’r confensiynau.
### Collnodau
Ni ddefnyddiwyd collnodau i marcio pob un llythyren a hepgorwyd gan siaradwyr. Er enghraifft, _gwitho_ (sef ynganiad o _gweithio_) sy’n gywir, nid _gw’ith’o_
Yn hytrach, defnyddiwyd collnodau i wahaniaethu rhwng gwahanol eiriau oedd yn cael eu sillafu'r union yr un fath fel arall. Er enghraifft rydym yn defnyddio collnod o flaen _’ma_ (sef _yma_) i wahaniaethu rhyngddo â _ma’_ (sef _mae_), _gor’o’_ i wahaniaethu rhwng _gorfod_ a ffurf trydydd person unigol amser dibynnol presennol _gori_, a _pwysa’_ i wahaniaethu rhwng ffurf luosog _pwys_ a nifer o ffurfiau berfol posib _pwyso_.
Fodd bynnag, ceir eithriad i’r rheol hon, a hynny pan fo sillafu gair heb gollnod yn newid sŵn y llythyren cyn neu ar ôl y collnod, ac felly _Cymra’g_ sy’n gywir, nid _Cymrag_.
### Tagiau
Wrth drawsgrifio, defnyddiwyd y tagiau hyn i recordio elfennau oedd y tu hwnt i leferydd yr unigolion:
* \<anadlu>
* \<aneglur>
* \<cerddoriaeth>
* \<chwerthin>
* \<chwythu allan>
* \<clirio gwddf>
* \<distawrwydd>
* \<ochneidio>
* \<PII>
* \<peswch>
* \<sniffian>
* \<twtian>
Rhagwelwn y bydd y rhestr hon yn chwyddo wrth i ni drawsgrifio mwy o leferydd ac wrth i ni daro ar draws mwy o elfennau sydd y tu hwnt i leferydd unigolion.
### Synau nad ydynt yn eiriol
Ymdrechwyd i drawsgrifio synau nad ydynt yn eiriol yn gyson. Er enghraifft, defnyddiwyd _yy_ bob tro (yn hytrach nag _yrr_, _yr_ neu _err_ neu gymysgedd o’r rheiny) i gynrychioli neu adlewyrchu’r sŵn a wnaethpwyd pan oedd siaradwr yn ceisio meddwl neu oedi wrth siarad.
Defnyddiwyd y canlynol wrth drawsgrifio:
* yy
* yym
* hmm
* m-hm
Eto, rhagwelwn y bydd y rhestr hon yn chwyddo wrth i ni drawsgrifio mwy o leferydd ac wrth i ni daro ar draws mwy o synau nad ydynt yn eiriol.
### Geiriau Saesneg
Rydym wedi amgylchynu bob gair neu ymadrodd Saesneg gyda sêr, er enghraifft:
> Dwi’n deall **\*sort of\***.
### Cymreigio berfenwau
Pan fo siaradwyr yn defnyddio geiriau Saesneg fel berfenwau (trwy ychwanegu _io_ ar ddiwedd y gair er enghraifft) rydym wedi ymdrechu i sillafu’r gair gan ddefnyddio confensiynau sillafu Cymreig yn hytrach nag ychwanegu _io_ at sillafiad Saesneg o’r gair. Er enghraifft rydym wedi trawsgrifio _heitio_ yn hytrach na _hateio_, a _lyfio_ yn hytrach na _loveio_.
### Cywiro cam-siarad
I sicrhau ein bod ni’n glynu at egwyddorion trawsgrifio verbatim penderfynwyd na ddylem gywiro cam-siarad neu gam-ynganu siaradwyr. Er enghraifft, yn y frawddeg ganlynol:
> enfawr fel y diffyg o fwyd yym **efallu** cam-drin
mae'n amlwg mai’r gair _efallai_ sydd dan sylw mewn gwirionedd, ond fe’i trawsgrifiwyd fel ei glywir.
### Atalnodi
Defnyddiwyd atalnodau llawn, marciau cwestiwn ac ebychnodau wrth drawsgrifio’r lleferydd.
Rydym wedi amgylchynu bob gair neu ymadrodd sydd wedi ei dyfynnu gyda _”_, er enghraifft:
> Dywedodd hi **”Dwi’n mynd”** ond aeth hi ddim.
### Nodyn ynghylch ein defnydd o gomas
Gan mai confensiwn ysgrifenedig yw coma yn y bôn, ni ddefnyddiwyd comas cymaint wrth drawsgrifio. Byddai defnyddio coma lle y disgwylir i’w weld mewn testun ysgrifenedig ddim o reidrwydd wedi adlewyrchu lleferydd yr unigolyn. Dylid cadw hynny mewn cof wrth ddarllen y trawsgrifiadau.
### Sillafu llythrennau
Sillafwyd llythrennau unigol yn hytrach na thrawsgrifio’r llythrennau unigol yn unig.
Hynny yw, hyn sy’n gywir:
> Roedd ganddo **ow si di**
**ac nid:**
> Roedd ganddo **O C D**
**na chwaith:**
> Roedd ganddo **OCD**
### Rhifau
Trawsgrifiwyd rhifau fel geiriau yn hytrach na digidau, hynny yw hyn sy’n gywir:
> Y flwyddyn dwy fil ac ugain
**ac nid:**
> Y flwyddyn 2020
### Gorffen gair ar ei hanner
Marciwyd gair oedd wedi ei orffen ar ei hanner gyda `-`. Er enghraifft:
> Ma’n rhaid i mi **ca-** cael diod.
### Gorffen brawddeg ar ei hanner/ailddechrau brawddeg
Marciwyd brawddeg oedd wedi ei gorffen ar ei hanner gyda `...`. Er enghraifft:
> Ma’n rhaid i mi ca’l... Ma’ rhaid i mi brynu diod.
### Siaradwr yn torri ar draws siaradwr arall
Ceir yn y data llawer o enghreifftiau o siaradwr yn torri ar draws y prif leferydd gan ddefnyddio synau nad ydynt yn eiriol, geiriau neu ymadroddion (megis _m-hm_, _ie_, _ydi_, _yn union_ ac ati). Pan oedd y ddau siaradwr i'w clywed yn glir ag ar wahân, rhoddwyd `...` ar ddiwedd rhan gyntaf y lleferydd toredig, a `...` arall ar ddechrau ail ran y lleferydd toredig, fel yn yr enghraifft ganlynol:
> Ond y peth yw... M-hm. ...mae’r ddau yn wir
Pan nad oedd y ddau siaradwyr i'w clywed yn glir ag ar wahân, fe hepgorwyd y lleferydd o’r data.
### Rhegfeydd
Dylid nodi ein bod ni heb hepgor rhegfeydd wrth drawsgrifio.
## Y Dyfodol
Wrth ddefnyddio’r banc trawsgrifiadau dylid cadw mewn cof mai fersiwn cychwynnol ydyw. Bwriadwn fireinio a chysoni ein trawsgrifiadau ymhellach, ac ychwanegu mwy fyth o drawsgrifiadau i’r banc yn rheolaidd dros y flwyddyn nesaf.
## Cyfyngiadau
Er mwyn parchu'r cyfrannwyr, wrth lwytho'r data hwn i lawr rydych yn cytuno i beidio â cheisio adnabod y siaradwyr yn y data.
## Diolchiadau
Diolchwn i'r cyfrannwyr am eu caniatâd i ddefnyddio'u lleferydd. Rydym hefyd yn ddiolchgar i Lywodraeth Cymru am ariannu’r gwaith hwn fel rhan o broject Technoleg Testun, Lleferydd a Chyfieithu ar gyfer yr Iaith Gymraeg.
---
# Bangor Transcription Bank
This resource is a bank of 30 hours 20 minutes and 41 seconds of segments of natural speech from over 50 contributors in mp3 file format, together with corresponding 'verbatim' transcripts of the speech in .tsv file format. The majority of the speech is spontaneous, natural speech. We distribute this material under a CC0 open license.
## Purpose
The purpose of these transcripts is to act as training data for speech recognition models, including [our wav2vec models](https://github.com/techiaith/docker-wav2vec2-cy). For that purpose, transcriptions are more verbatim than what is seen in traditional transcriptions and than what is required for subtitling purposes, thus a bespoke set of conventions has been developed for the transcription work ([see below](#transcription_conventions) ). Our wav2vec models use an auxiliary component, namely a 'language model', to further standardize the speech recognition model’s output in order that it be more similar to traditional transcriptions and subtitles.
We have provided 3 .tsv files, namely clips.tsv, train.tsv and test.tsv. clips.tsv contains all of our transcripts. train.tsv and test.tsv were created to provide 'standard' sets that allow users to compare models trained by different trainers fairly, i.e. they were created as a 'benchmark'. train.tsv contains 80% of our transcripts, and test.tsv contains the remaining 20%.
Here is an example of the data content:
```
audio_filename audio_filesize transcript duration
f86a046fd0964e0386d8c1363907183d.mp3 898272 *post industrial* yym a gyda yy dwi'n ca'l deud 5092
f0c2310fdca34faaa83beca5fa7ed212.mp3 809720 sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra 4590
3eec3feefe254c9790739c22dd63c089.mp3 1335392 Felly ma' hon hefyd yn ddogfen fydd yn trosglwyddo gyda'r plant bobol ifanc o un cam i'r llall ac hefyd erbyn hyn i'r coleg 'lly. 7570
```
There are four columns in the .tsv files. The first is the name of the audio file. The second is the size of the audio file. The transcript itself appears in the third column. The length of the audio clip appears in the last.
Here is the information about the columns.
| Field| Explanation |
| ------ | ------ |
| `audio_filename`| The name of the audio file within the 'clips' folder|
| `audio_filesize` | The size of the file |
| `transcript` | Transcript |
| `duration` | Duration of the clip in milliseconds. |
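A minimal sketch of reading the .tsv metadata with Python's standard csv module (in practice one would open clips.tsv; here a string with the two example rows above stands in for the file):

```python
import csv
import io

# An in-memory stand-in for clips.tsv, reusing the example rows above.
tsv = (
    "audio_filename\taudio_filesize\ttranscript\tduration\n"
    "f86a046fd0964e0386d8c1363907183d.mp3\t898272\t"
    "*post industrial* yym a gyda yy dwi'n ca'l deud\t5092\n"
    "f0c2310fdca34faaa83beca5fa7ed212.mp3\t809720\t"
    "sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra\t4590\n"
)

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
total_ms = sum(int(row["duration"]) for row in rows)
assert total_ms == 9682  # durations are given in milliseconds
```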
## The Process of Creating the Resource
The audio files were mainly collected from Welsh podcasts, after having gained the consent of the podcast owners and individual contributors to do so. We are extremely grateful to those people. In addition, some scripts were created which mimicked the pattern of news items and articles. These scripts were then read by Language Technologies Unit researchers in order to ensure that content of that type was included in the bank.
The audio files were run through our in-house automated transcriber to segment the audio and create raw transcripts. Using Elan 6.4 (available from https://archive.mpi.nl/tla/elan), experienced transcribers listened to and corrected the raw transcript.
## A Note About Content Anonymization
Out of respect to the contributors, we have anonymised all transcripts. It was decided to anonymize not only the names of individual people, but also any other Personally Identifiable Information (PII) including, but not limited to:
* Phone number
* Job titles/occupations
* Workplaces
* Names of public places
* Geographical location
* Dates/times
When transcribing, all segments containing PII were marked with the \<PII> tag, we then filtered out all segments containing a \<PII> tag to ensure no personal information was published as part of this resource.
We have also randomized the order of the segments so that they are not published in the order they appeared in the original audio files.
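The two publication steps above, dropping tagged segments and shuffling the remainder, can be sketched like this. The field name `transcript` is taken from the .tsv description; the segments themselves are purely illustrative.

```python
import random

# Hypothetical segments; any segment whose transcript carries a <PII> tag
# is dropped, then the surviving segments are shuffled so their published
# order no longer matches the original audio.
segments = [
    {"transcript": "Dwi'n gweithio yn <PII>"},
    {"transcript": "Bore da bawb"},
    {"transcript": "Diolch yn fawr"},
]

published = [s for s in segments if "<PII>" not in s["transcript"]]
random.shuffle(published)

print(len(published))  # 2 segments survive the PII filter
```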
<a name="transcription_conventions"></a>
## Transcription Conventions
These transcription conventions were developed to ensure that the transcriptions were not only verbatim but also consistent. They were developed by referring to conventions used by the Unit in the past, conventions such as those used in the CorCenCC, Siarad, CIG1 and CIG2 corpora, and also through a process of ongoing development as the team undertook the task of transcription.
**NOTE** - as we have partially developed the conventions at the same time as undertaking the task of transcription the early transcriptions may not follow the latest principles faithfully. We intend to check the transcripts after we have refined the conventions.
### Apostrophes
Apostrophes were not used to mark every single letter omitted by speakers. For example, _gwitho_ (which is a pronunciation of _gweithio_) is correct, not _gw’ith'o_.
Rather, apostrophes were used to distinguish between different words that were otherwise spelled identically. For example we use an apostrophe in front of _'ma_ (a pronunciation of _yma_) to distinguish it from _ma'_ (a pronunciation of _mae_), _gor'o'_ to distinguish between _gorfod_ and the third person singular form of the present dependent tense _gori_, and _pwysa'_ to distinguish between the plural form of _pwys_ and a number of possible verb forms of _pwyso_.
However, there is an exception to this rule, that being when spelling a word without an apostrophe would change the sound of the letter before or after the apostrophe, thus _Cymra'g_ is correct, not _Cymrag_.
### Tags
When transcribing, these tags were used to record elements that were external to the speech of the individuals:
* \<anadlu>
* \<aneglur>
* \<cerddoriaeth>
* \<chwerthin>
* \<chwythu allan>
* \<clirio gwddf>
* \<distawrwydd>
* \<ochneidio>
* \<PII>
* \<peswch>
* \<sniffian>
* \<twtian>
We anticipate that this list will grow as we transcribe more speech and as we come across more elements that are external to the speech of individuals.
### Non-verbal sounds
Efforts were made to transcribe non-verbal sounds consistently. For example, _yy_ was always used (rather than _yrr_, _yr_ or _err_, or a mixture of those) to represent or reflect the sound made when a speaker was trying to think or paused in speaking.
The following were used in transcription:
* yy
* yym
* hmm
* m-hm
Again, we anticipate that this list will grow as we transcribe more speech and as we encounter more non-verbal sounds.
### English words
We have surrounded each English word or phrase with asterisks, for example:
> Dwi’n deall **\*sort of\***.
### Adapting English words as Welsh language infinitives
When speakers use English words as infinitives (by adding _io_ at the end of the word for example) we have endeavoured to spell the word using Welsh spelling conventions rather than adding _io_ to the English spelling of the word. For example we have transcribed _heitio_ instead of _hateio_, and _lyfio_ instead of _loveio_.
### Correction of mis-pronunciations
To ensure that we adhere to the principles of verbatim transcription it was decided that we should not correct speakers' mis-pronunciations. For example, in the following sentence:
> enfawr fel y diffyg o fwyd yym **efallu** cam-drin
it is clear that _efallai_ is the intended word, but it is transcribed as it is heard.
### Punctuation
Full stops, question marks and exclamation marks were used when transcribing the speech.
We have surrounded all quoted words or phrases with _”_, for example:
> Dywedodd hi **”Dwi’n mynd”** ond aeth hi ddim.
### A note about our use of commas
As a comma is essentially a convention used for written text, commas were not used prolifically in transcription. Using a comma wherever one would expect to see it in a written text would not necessarily have reflected the individual's speech. This should be borne in mind when reading the transcripts.
### Individual letters
Individual letters were spelled out as they are pronounced rather than being transcribed as the letters themselves.
That is, this is correct:
> Roedd ganddo **ow si di**
**not:**
> Roedd ganddo **O C D**
**nor:**
> Roedd ganddo **OCD**
### Numbers
Numbers were transcribed as words rather than digits, thus this is correct:
> Y flwyddyn dwy fil ac ugain
**rather than:**
> Y flwyddyn 2020
### Half-finished words
Half-finished words are marked with a `-`. For example:
> Ma’n rhaid i mi **ca-** cael diod.
### Half-finished/restarted sentences
Half-finished sentences are marked with a `...`. For example:
> Ma’n rhaid i mi ca’l... Ma’ rhaid i mi brynu diod.
### Speaker interruptions
There are many examples of a speaker interrupting another speaker by using non-verbal sounds, words or phrases (such as _m-hm_, _ie_, _ydi_, _yn union_ etc.) in the data. When the two speakers could be heard clearly and distinctly, a `...` was placed at the end of the first part of the broken speech, and another `...` at the beginning of the second part of the broken speech, as in the following example:
> Ond y peth yw... M-hm. ...mae’r ddau yn wir
When the two speakers could not be heard clearly and distinctly, the speech was omitted from the data.
### Swearwords
It should be noted that we have not omitted swearwords when transcribing.
## The future
That this is an initial version of the transcript bank should be borne in mind when using this resource. We intend to refine and harmonize our transcripts further, and add yet more transcripts to the bank regularly over the next year.
## Restrictions
In order to respect the contributors, by downloading this data you agree not to attempt to identify the speakers in the data.
## Acknowledgements
We thank the contributors for their permission to use their speech. We are also grateful to the Welsh Government for funding this work as part of the Text, Speech and Translation Technology project for the Welsh Language.
Ashish08/celeb-identities | Ashish08 | 2023-05-13T13:40:46Z | 18 | 0 | null | [
"region:us"
] | 2023-05-13T13:40:46Z | 2023-05-13T12:23:44.000Z | 2023-05-13T12:23:44 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': David_Schwimmer
'1': Megan_Fox
'2': Mila_Kunis
'3': Ryan_Reynolds
'4': Scarlett_Johansson
'5': Wayne_Rooney
splits:
- name: train
num_bytes: 914546.0
num_examples: 18
download_size: 916734
dataset_size: 914546.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
0x22almostEvil/reasoning-gsm-qna-oa | 0x22almostEvil | 2023-05-13T15:43:31Z | 18 | 2 | null | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"QnA",
"math",
"programming",
"region:us"
] | 2023-05-13T15:43:31Z | 2023-05-13T15:09:16.000Z | 2023-05-13T15:09:16 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- QnA
- math
- programming
size_categories:
- 1K<n<10K
---
# Dataset Card for GSM QnA reasoning with ~8.8K entries.
### Dataset Summary
Contains Parquet of a list of instructions and answers.
Each row consists of
* INSTRUCTION
* RESPONSE
* SOURCE
* METADATA (json with language).
### Original Datasets are available here:
* https://huggingface.co/datasets/gsm8k
* https://huggingface.co/datasets/reasoning-machines/gsm-hard
Pranavkpba2000/skin_cancer_dataset | Pranavkpba2000 | 2023-05-14T08:47:49Z | 18 | 1 | null | [
"region:us"
] | 2023-05-14T08:47:49Z | 2023-05-14T08:40:43.000Z | 2023-05-14T08:40:43 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 9380942753.528
num_examples: 28516
- name: test
num_bytes: 1445202498.285
num_examples: 7105
download_size: 9852696203
dataset_size: 10826145251.813
---
# Dataset Card for "skin_cancer_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
darknoon/noto-emoji-vector-512-svg | darknoon | 2023-05-14T18:11:12Z | 18 | 1 | null | [
"region:us"
] | 2023-05-14T18:11:12Z | 2023-05-14T14:44:25.000Z | 2023-05-14T14:44:25 | ---
dataset_info:
features:
- name: image
dtype: image
- name: codepoints
sequence: int64
- name: name
dtype: string
- name: text
dtype: string
- name: svg_path
dtype: string
- name: svg_text
dtype: string
splits:
- name: train
num_bytes: 90176885.81
num_examples: 2329
download_size: 74032133
dataset_size: 90176885.81
---
# Dataset Card for "noto-emoji-vector-512-svg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Cofacts/line-msg-fact-check-tw | Cofacts | 2023-10-11T13:06:33Z | 18 | 1 | null | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:zh",
"license:cc-by-sa-4.0",
"fact-checking",
"crowd-sourcing",
"region:us"
] | 2023-10-11T13:06:33Z | 2023-05-16T05:09:10.000Z | 2023-05-16T05:09:10 | ---
license: cc-by-sa-4.0
language:
- zh
pretty_name: Cofacts archive for reported messages and crowd-sourced fact-check replies
tags:
- fact-checking
- crowd-sourcing
size_categories:
- 100K<n<1M
extra_gated_prompt: >-
To access this repository, you agree to follow the [Cofacts Data User Agreement](https://github.com/cofacts/opendata/blob/master/LEGAL.md).
This is vital to sustain a crowd-sourced database like Cofacts to attribute the fact-checking community that contributed to this dataset.
欲存取此資料集,需同意[Cofacts 真的假的 資料使用者條款](https://github.com/cofacts/opendata/blob/master/LEGAL.md)。
彰顯查核社群對此資料集之貢獻,對協作型資料庫如 Cofacts 的永續發展至關重要。
It would be great if you share with us who you are and your planned usage of the Cofacts data. Your cooperation is greatly appreciated.
If you have no specific details to share with us, please simply enter "n/a."
若方便的話,希望您可以與 Cofacts 工作小組分享您的單位以及預計會怎麼運用這個資料,感謝您!若不方便,可輸入「n/a」。
extra_gated_fields:
'I agree to follow the Data User Agreement and promise to attribute Cofacts as specified 我同意遵守資料使用者條款並承諾按規定彰顯 Cofacts': checkbox
'Anything to share with us 有什麼想要與我們分享的嗎': text
configs:
- config_name: analytics
data_files: analytics.csv.zip
- config_name: article_categories
data_files: article_categories.csv.zip
- config_name: article_hyperlinks
data_files: article_hyperlinks.csv.zip
lineterminator: |+
- config_name: article_replies
data_files: article_replies.csv.zip
- config_name: article_reply_feedbacks
data_files: article_reply_feedbacks.csv.zip
lineterminator: |+
- config_name: articles
data_files: articles.csv.zip
lineterminator: |+
default: true
- config_name: categories
data_files: categories.csv.zip
lineterminator: |+
- config_name: replies
data_files: replies.csv.zip
lineterminator: |+
- config_name: reply_hyperlinks
data_files: reply_hyperlinks.csv.zip
lineterminator: |+
- config_name: reply_requests
data_files: reply_requests.csv.zip
lineterminator: |+
- config_name: anonymized_users
data_files: anonymized_users.csv.zip
lineterminator: |+
task_categories:
- text-classification
- question-answering
---
# Cofacts Archive for Reported Messages and Crowd-Sourced Fact-Check Replies
[](https://colab.research.google.com/drive/1qdE-OMJTi6ZO68J6KdzGdxNdheW4ct6T?usp=sharing)
The Cofacts dataset encompasses instant messages that have been reported by users of the [Cofacts chatbot](https://line.me/R/ti/p/@cofacts) and the replies provided by the [Cofacts crowd-sourced fact-checking community](https://www.facebook.com/groups/cofacts/).
## Attribution to the Community
This dataset is a result of contributions from both Cofacts LINE chatbot users and the community fact checkers.
To appropriately attribute their efforts, please adhere to the rules outlined in the [Cofacts 真的假的 資料使用者條款 (Cofacts Data User Agreement)](https://github.com/cofacts/opendata/blob/master/LEGAL.md).
Unless stated otherwise, when redistributing Cofacts data outside the LINE application, the attribution specified by the Cofacts Working Group is as follows:
> This data by Cofacts message reporting chatbot and crowd-sourced fact-checking community is licensed under CC BY-SA 4.0. To provide more info, please visit Cofacts LINE bot https://line.me/ti/p/@cofacts
除非以其他方式議定,否則 Cofacts 真的假的工作小組,針對在 LINE 之外的地方散布的 Cofacts 所提供資料,所指定的中文顯名聲明為:
> 本編輯資料取自「Cofacts 真的假的」訊息回報機器人與查證協作社群,採 CC BY-SA 4.0 授權提供。若欲補充資訊請訪問 Cofacts LINE bot https://line.me/ti/p/@cofacts
For more detailed information, please refer to [Cofacts 真的假的 資料使用者條款](https://github.com/cofacts/opendata/blob/master/LEGAL.md).
## How to Access Cofacts Data
To access Cofacts data, you should first register on Hugging Face and accept the Cofacts Data User Agreement. Afterward, you can preview the data on the Hugging Face website.
You can access Cofacts data through the following methods:
1. Load `cofacts/line-msg-fact-check-tw` with Hugging Face's `load_dataset('Cofacts/line-msg-fact-check-tw', TABLE_NAME)`.
2. Download individual zipped CSV files in the `Files` tab on the Hugging Face website.
If you plan to process the data using Python, `load_dataset()` is the simpler solution.
Please refer to [Example on Google Colab](https://colab.research.google.com/drive/1qdE-OMJTi6ZO68J6KdzGdxNdheW4ct6T?usp=sharing) to get started.
## Data Formats
Cofacts data comprises multiple normalized tables, with some tables containing foreign keys to other tables' IDs.
If you have manually downloaded the data, the tables are distributed as zipped CSV files. These files use `\n` as the line terminator, and quotes are used around multi-line contents.
The [`csv-stringify`](https://www.npmjs.com/package/csv-stringify) library is employed to perform escaping and handle quotes and multi-line contents.
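As a sketch of the quoting behaviour described above (field names are hypothetical): a multi-line value stays inside one logical row when read back with a standard CSV parser.

```python
import csv
import io

# One CSV record whose "text" field spans two physical lines; the quotes
# written by csv-stringify keep it a single logical row on the way back in.
raw = 'id,text\na1,"line one\nline two"\n'

rows = list(csv.DictReader(io.StringIO(raw)))
print(len(rows))        # 1 logical row despite the embedded newline
print(rows[0]["text"])
```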
### Fields in All Tables
* `userIdsha` (string) Hashed user identifier.
* `appId` (string) Possible values include:
* `LEGACY_APP`: Articles collected before 2017-03.
* `RUMORS_LINE_BOT`: Articles collected with the current LINE bot client after 2017-03.
These two fields together uniquely identify a user across different CSV files. For example, if one row (reply) in `replies.csv` and another row (feedback) in `article_reply_feedbacks.csv` have identical `userIdsha` and `appId`, it indicates that the reply and the feedback were submitted by the same user.
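For example, the pair acts as a composite key, so equality on both fields identifies the same contributor. The rows below are hypothetical; only the field names come from the tables.

```python
# Hypothetical rows from replies.csv and article_reply_feedbacks.csv;
# the (userIdsha, appId) pair is the cross-file user identifier.
reply = {"userIdsha": "ab12cd", "appId": "RUMORS_LINE_BOT", "text": "hello"}
feedback = {"userIdsha": "ab12cd", "appId": "RUMORS_LINE_BOT", "score": 1}

def user_key(row):
    # Composite key identifying one user across the CSV files.
    return (row["userIdsha"], row["appId"])

same_author = user_key(reply) == user_key(feedback)
print(same_author)  # True: both rows came from the same user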
Also, these fields are commonly seen in multiple tables:
* `status`: The current visibility of this document. Possible values include:
* `NORMAL`: The document is normally visible.
* `DELETED`: The document has been deleted by its author. For some entities (tables), deletion is not implemented, so this value does not occur.
* `BLOCKED`: The document is hidden by the Cofacts Working Group. These documents are from a blocked user, with `blockedReason` pointing to announcements in [Cofacts Takedown Announcements](https://github.com/cofacts/takedowns).
## Tables and their fields
### `articles`
The instant messages LINE bot users submitted into the database.
| Field | Data type | Description |
| ----------------------- | -------- | ---- |
| `id` | String | |
| `articleType` | Enum string | `TEXT`, `IMAGE`, `VIDEO` or `AUDIO`. |
| `status` | Enum string | `NORMAL` or `BLOCKED`. |
| `text` | Text | The instant message text |
| `normalArticleReplyCount` | Integer | The number of replies associated with this article, excluding deleted reply associations. |
| `createdAt` | ISO time string | When the article is submitted to the database. |
| `updatedAt` | ISO time string | Preserved, currently identical to `createdAt` |
| `lastRequestedAt` | ISO time string | The submission time of the last `reply_request` sent on the article before the article was replied to. |
| `userIdsha256` | String | Author of the article.|
| `appId` | String | |
| `references` | Enum string | Where the message is from. Currently the only possible value is `LINE`. |
### `article_hyperlinks`
Parsed hyperlink contents in each instant messages, parsed using [cofacts/url-resolver](https://github.com/cofacts/url-resolver/).
The data is used in Cofacts system for indexing and retrieving messages.
| Field | Data type | Description |
| ---------------- | -------- | ---- |
| `articleId` | String | |
| `url` | String | The URL string detected in article |
| `normalizedUrl` | String | Canonical URL after normalization process including unfolding shortened URLs |
| `title` | String | Title of the scrapped web content |
Note: Scraped contents do not belong to Cofacts and are redistributed for research purposes only.
The scraping mechanism is not reliable either.
Researchers may need to implement their own scraper if content is important in their research.
### `article_categories`
Categories linked to this article.
| Field | Data type | Description |
| ---------------- | ---------- | ---- |
| `articleId` | String | |
| `categoryId` | String | |
| `aiConfidence` | Number | Confidence level of the AI marking this category. Empty for crowd-sourced labels. |
| `aiModel` | String | Name of the AI model marking this category. Empty for crowd-sourced labels. |
| `userIdsha256` | String | The person that connected the article and the category. |
| `appId` | String | |
| `negativeFeedbackCount` | Integer | Number of `article_category_feedbacks` that has score `-1` |
| `positiveFeedbackCount` | Integer | Number of `article_category_feedbacks` that has score `1` |
| `status` | Enum string | `NORMAL`: The category and article are connected. `DELETED`: The category does not connect to the article anymore. |
| `createdAt` | ISO time string | The time when the reply is connected to the article |
| `updatedAt` | ISO time string | The latest date when the category's status is updated |
### `categories`
| Field | Data type | Description |
| ------------- | --------- | ----------- |
| `id` | String | |
| `title` | String | Name of the category |
| `description` | Text | Definition of the category |
| `createdAt` | ISO time string | |
| `updatedAt` | ISO time string | |
### `article_replies`
Articles and replies are in has-and-belongs-to-many relationship. That is, an article can have multiple replies, and a reply can be connected to multiple similar articles.
`article_replies` is the "join table" between `articles` and `replies`, bringing `articleId` and `replyId` together, along with other useful properties related to this connection between an article and a reply.
One pair of `articleId`, `replyId` will map to exactly one `article_reply`.
| Field | Data type | Description |
| --------------------- | -------- | - |
| `articleId` | String | Relates to `id` field of `articles` |
| `replyId` | String | Relates to `id` field of `replies` |
| `userId` | String | The user connecting the reply with the article |
| `negativeFeedbackCount` | Integer | Number of `article_reply_feedbacks` that has score `-1` |
| `positiveFeedbackCount` | Integer | Number of `article_reply_feedbacks` that has score `1` |
| `replyType` | Enum string | Duplicated from the `type` field of `replies`. |
| `appId` | String | |
| `status` | Enum string | `NORMAL`: The reply and article are connected. `DELETED`: The reply does not connect to the article anymore. `BLOCKED`: It comes from a blocked user. |
| `createdAt` | ISO time string | The time when the reply is connected to the article |
| `updatedAt` | ISO time string | The latest date when the reply's status is updated |
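Resolving this many-to-many link can be sketched with hypothetical in-memory rows (real data would come from the `articles`, `replies`, and `article_replies` CSVs):

```python
# Join articles to replies through the article_replies rows, keeping only
# connections whose status is NORMAL, as described above.
articles = {"a1": "a suspicious forwarded message"}
replies = {"r1": "this claim is a rumor", "r2": "an old reply"}
article_replies = [
    {"articleId": "a1", "replyId": "r1", "status": "NORMAL"},
    {"articleId": "a1", "replyId": "r2", "status": "DELETED"},  # skipped
]

pairs = [
    (articles[ar["articleId"]], replies[ar["replyId"]])
    for ar in article_replies
    if ar["status"] == "NORMAL"
]
print(pairs)
```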
### `replies`
Editor's reply to the article.
| Field | Data type | Description |
| --------- | -------- | - |
| `id` | String | |
| `type` | Enum string | Type of the reply chosen by the editor. `RUMOR`: The article contains rumor. `NOT_RUMOR`: The article contains fact. `OPINIONATED`: The article contains personal opinions. `NOT_ARTICLE`: The article should not be processed by Cofacts. |
| `reference` | Text | For `RUMOR` and `NOT_RUMOR` replies: The reference to support the chosen `type` and `text`. For `OPINIONATED` replies: References containing different perspectives from the `article`. For `NOT_ARTICLE`: empty string. |
| `userId` | String | The editor that authored this reply. |
| `appId` | String | |
| `text` | Text | Reply text written by the editor |
| `createdAt` | ISO Time string | When the reply is written |
### `reply_hyperlinks`
Parsed hyperlink contents in reply text and references, parsed using [cofacts/url-resolver](https://github.com/cofacts/url-resolver/).
The data is used in Cofacts system for URL previews.
| Field | Data type | Description |
| ---------------- | -------- | ---- |
| `replyId` | String | |
| `url` | String | The URL string detected in the reply |
| `normalizedUrl` | String | Canonical URL after normalization process including unfolding shortened URLs |
| `title` | String | Title of the scrapped web content |
Note: Scraped contents do not belong to Cofacts and are redistributed for research purposes only.
The scraping mechanism is not reliable either.
Researchers may need to implement their own scraper if content is important in their research.
### `reply_requests`
Before an article is replied, users may submit `reply_requests` to indicate that they want this article to be answered.
When an article is first submitted to the database, a reply request is also created. Any further queries to the same article submit new `reply_requests`.
A user can only submit one reply request per article.
| Field | Data type | Description |
| --------- | -------- | - |
| `articleId` | String | The target of the request |
| `reason` | Text | The reason why the user wants to submit this reply request |
| `status` | Enum string | `NORMAL` or `BLOCKED`. |
| `positiveFeedbackCount` | Integer | Number of editors who think the reason is reasonable |
| `negativeFeedbackCount` | Integer | Number of editors who think the reason is nonsense |
| `createdAt` | ISO Time string | When the reply request is issued |
### `article_reply_feedbacks`
Editors and LINE bot users can express if a reply is useful by submitting `article_reply_feedbacks` toward a `article_reply` with score `1` or `-1`.
The feedback is actually submitted toward an `article_reply`, the connection between an article and a reply. This is because a reply can be connected to multiple articles: a reply that makes sense for one article is not necessarily useful in answering another. Therefore, the feedback counts for a reply connected to different articles are kept separately.
| Field | Data type | Description |
| --------- | -------- | - |
| `articleId` | String | Relates to `articleId` of the target `article_reply` |
| `replyId` | String | Relates to `replyId` of the target `article_reply` |
| `score` | Integer | `1`: Useful. `-1`: Not useful. |
| `comment` | Text | Why the user chose this score for the article reply |
| `status` | Enum string | `NORMAL` or `BLOCKED`. |
| `createdAt` | ISO Time string | When the feedback is submitted |
### `analytics`
Usage (visit / show) statistics of website and Cofacts LINE bot.
LINE bot data starts from April 2nd, 2018; website data starts from May 3rd, 2017.
| Field | Data type | Description |
| ----------- | --------------- | ----------- |
| `type` | Enum string | Either `article` or `reply` |
| `docId` | String | Article ID or Reply ID that is being visited / shown |
| `date` | ISO Time string | The date of usage, represented by start of the day (0:00:00+08:00) |
| `lineUser` | Integer | The number of LINE users who inspected this article / reply in Cofacts LINE bot in this date. May be empty if no such users |
| `lineVisit` | Integer | The number of times this article / reply is inspected in Cofacts LINE bot in this date. May be empty if no visits |
| `webUser` | Integer | The number of web users who visited this article page (`/article/<docId>`) / reply page (`/reply/<docId>`) in Cofacts website in this date. May be empty if no such users |
| `webVisit` | Integer | The number of page views of this article page (`/article/<docId>`) / reply page (`/reply/<docId>`) in Cofacts website in this date. May be empty if no page views |
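A sketch of rolling these daily counters up per document (rows are hypothetical; missing counters are treated as zero, matching the "may be empty" notes above):

```python
# Hypothetical daily analytics rows for one article; None stands in for
# the empty cells described above.
analytics = [
    {"docId": "a1", "date": "2018-04-02", "lineVisit": 3, "webVisit": None},
    {"docId": "a1", "date": "2018-04-03", "lineVisit": 2, "webVisit": 5},
]

total_visits = sum((r["lineVisit"] or 0) + (r["webVisit"] or 0) for r in analytics)
print(total_visits)  # combined LINE bot and website visits
```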
### `anonymized_users`
The users of Cofacts, including Cofacts chatbot and website users.
| Field | Data type | Description |
| ----------- | --------------- | ----------- |
| `userIdsha256` | String | The ID that is used in other tables to denote the creator of the entity. |
| `appId` | String | Where this user account is registered. `RUMORS_LINE_BOT` is Cofacts official LINE account. Registered user on Cofacts website has empty `appId`. |
| `createdAt` | ISO Time string | The initial registration date for the user. |
| `lastActiveAt` | ISO Time string | The last date the account is active. |
| `blockedReason` | String | If present, all submissions from the user are hidden by the Cofacts WG. This field contains the announcement explaining why the Cofacts WG blocked the user. |
## ⚠ [NOTICE] Caveats of using this data ⚠
The methodology we use to collect these data (i.e. [how Cofacts works](https://beta.hackfoldr.org/cofacts/https%253A%252F%252Fhackmd.io%252Fs%252FBJSdbUMpZ))
could have some impact on the data credibility.

Please keep in mind that all data in this dataset are user-generated,
thus is not free from noise and sampling bias coming from these sources:
- The distribution of Cofacts' users may not reflect the real distribution of all LINE users in Taiwan.
- Users may not use Cofacts in the way we intend.
  Some `articles` may not be actual messages circulating in the LINE network.
- `replies` may contain factual errors.
All replies should be merely regarded as "responses to the original message (`article`) to provide different point of view".
They are neither the "truth" nor the editor's personal opinion.
- There may also exist malicious users sending garbage `articles` into the database. [(Previous incident reports)](https://hackmd.io/@cofacts/incidents)
- The program used to collect data and generate the dataset may contain errors.
  The dataset may therefore be systematically inaccurate.
Lastly, the dataset is provided without warranty.
THE DATASET IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE OR OTHER DEALINGS IN THE DATASET.
mask-distilled-one-sec-cv12/chunk_83 | mask-distilled-one-sec-cv12 | 2023-05-19T22:51:18Z | 18 | 0 | null | [
"region:us"
] | 2023-05-19T22:51:18Z | 2023-05-19T22:50:23.000Z | 2023-05-19T22:50:23 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1301973480
num_examples: 255690
download_size: 1325402800
dataset_size: 1301973480
---
# Dataset Card for "chunk_83"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Amite5h/Eurosat-Datast | Amite5h | 2023-05-25T16:39:08Z | 18 | 1 | null | [
"region:us"
] | 2023-05-25T16:39:08Z | 2023-05-25T16:39:00.000Z | 2023-05-25T16:39:00 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AnnualCrop
'1': Forest
'2': HerbaceousVegetation
'3': Highway
'4': Industrial
'5': Pasture
'6': PermanentCrop
'7': Residential
'8': River
'9': SeaLake
splits:
- name: train
num_bytes: 88397609.0
num_examples: 27000
download_size: 88592405
dataset_size: 88397609.0
---
# Dataset Card for "Eurosat-Datast"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Mutonix/RefGPT-Fact | Mutonix | 2023-05-30T13:33:07Z | 18 | 12 | null | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:apache-2.0",
"arxiv:2305.14994",
"region:us"
] | 2023-05-30T13:33:07Z | 2023-05-26T01:37:53.000Z | 2023-05-26T01:37:53 | ---
license: apache-2.0
dataset_info:
features:
- name: dialogue
dtype: string
- name: reference
dtype: string
- name: language
dtype: string
- name: type
dtype: string
splits:
- name: zh
num_bytes: 180760081
num_examples: 50000
- name: en
num_bytes: 464054853
num_examples: 50000
download_size: 260969665
dataset_size: 644814934
task_categories:
- conversational
language:
- zh
- en
arxiv: https://arxiv.org/abs/2305.14994
size_categories:
- 10K<n<100K
---
# Dataset Card for RefGPT-Fact
## Dataset Description
- **Homepage:**
- **Repository:** [https://github.com/ziliwangnlp/RefGPT](https://github.com/ziliwangnlp/RefGPT)
- **Paper:** [https://arxiv.org/abs/2305.14994](https://arxiv.org/abs/2305.14994)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
<p align="center">
<a href="https://arxiv.org/abs/2305.14994"><b>[Paper] RefGPT</b></a> |
<a href="https://github.com/ziliwangnlp/RefGPT"><b>[Github] RefGPT</b></a>
</p>
RefGPT-Fact is a dataset containing 100k multi-turn dialogues about factual knowledge, 50k in English and 50k in Chinese. The English version uses the English Wikipedia as its reference, and the Chinese version uses Baidu Baike, a frequently used Chinese online encyclopedia.
### Supported Tasks and Leaderboards
Chatbot instruction finetuning
### Languages
Chinese, English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Please note that the RefGPT datasets, including RefGPT-Fact and RefGPT-Code, have not undergone manual verification, and as such their safety cannot be strictly guaranteed. Users should be aware that they are responsible for the results generated using this data.
### Discussion of Biases
As the RefGPT-Fact and RefGPT-Code datasets are collected using references such as Wikipedia and GitHub repositories, it cannot be avoided that a reference itself may contain factual errors, typos, or (in the case of GitHub repositories) bugs and malicious code. The datasets may also reflect the biases of the selected references and of the GPT-3.5/GPT-4 models.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@misc{yang2023refgpt,
title={RefGPT: Reference -> Truthful & Customized Dialogues Generation by GPTs and for GPTs},
author={Dongjie Yang and Ruifeng Yuan and YuanTao Fan and YiFei Yang and Zili Wang and Shusen Wang and Hai Zhao},
year={2023},
eprint={2305.14994},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[More Information Needed] | [
-0.34281203150749207,
-0.6249409914016724,
0.20720277726650238,
0.1628769487142563,
-0.2936669588088989,
-0.25302568078041077,
-0.31476667523384094,
-0.35485440492630005,
0.09451903402805328,
0.3737768232822418,
-0.6075901985168457,
-0.5549765825271606,
-0.45134449005126953,
0.062908358871... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distilled-one-sec-cv12-each-chunk-uniq/chunk_194 | distilled-one-sec-cv12-each-chunk-uniq | 2023-05-29T03:04:45Z | 18 | 0 | null | [
"region:us"
] | 2023-05-29T03:04:45Z | 2023-05-29T03:03:49.000Z | 2023-05-29T03:03:49 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1016130868.0
num_examples: 197999
download_size: 1031967918
dataset_size: 1016130868.0
---
# Dataset Card for "chunk_194"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6108604073524475,
-0.35437971353530884,
0.31340140104293823,
0.4797667860984802,
-0.4323829412460327,
0.04524795711040497,
0.2532271146774292,
-0.2866557240486145,
1.0287667512893677,
0.469714492559433,
-0.8169521689414978,
-0.429132342338562,
-0.6726595163345337,
-0.20330552756786346,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CHENHJDJSD/example | CHENHJDJSD | 2023-06-02T03:47:12Z | 18 | 0 | null | [
"region:us"
] | 2023-06-02T03:47:12Z | 2023-06-02T03:46:41.000Z | 2023-06-02T03:46:41 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 75848559.0
num_examples: 185
download_size: 75853693
dataset_size: 75848559.0
---
# Dataset Card for "example"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5439295768737793,
-0.464952677488327,
0.17711974680423737,
0.019461803138256073,
-0.38867178559303284,
-0.2816811800003052,
0.3126085102558136,
-0.03947775065898895,
0.8422475457191467,
0.42451953887939453,
-0.8564953207969666,
-0.7760323882102966,
-0.41567859053611755,
-0.2424716800451... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
emad12/stock_tweets_sentiment | emad12 | 2023-06-04T09:48:20Z | 18 | 3 | null | [
"region:us"
] | 2023-06-04T09:48:20Z | 2023-06-02T09:10:31.000Z | 2023-06-02T09:10:31 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: post_date
dtype: string
- name: tweet
dtype: string
- name: sentiment
dtype: int64
- name: ticker_symbol
dtype: string
- name: tweet_cleaned
dtype: string
- name: __index_level_0__
dtype: int64
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 321710487
num_examples: 96000
- name: test
num_bytes: 80421371
num_examples: 24000
download_size: 32053237
dataset_size: 402131858
---
# Dataset Card for "stock_tweets_sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.38870474696159363,
-0.11936680972576141,
-0.02876405417919159,
0.5543619990348816,
-0.47017717361450195,
0.39361152052879333,
0.08965659141540527,
0.14814241230487823,
1.110952615737915,
0.18326495587825775,
-0.850376307964325,
-1.0554065704345703,
-0.8148629665374756,
-0.51479697227478... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Salama1429/tarteel-ai-everyayah-Quran | Salama1429 | 2023-06-07T14:17:32Z | 18 | 2 | tarteel-everyayah | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:mit",
"region:us"
] | 2023-06-07T14:17:32Z | 2023-06-07T07:15:22.000Z | 2023-06-07T07:15:22 | ---
pretty_name: Tarteel AI - EveryAyah Dataset
dataset_info:
features:
- name: audio
dtype: audio
- name: duration
dtype: float64
- name: text
dtype: string
- name: reciter
dtype: string
splits:
- name: train
num_bytes: 262627688145.3
num_examples: 187785
- name: test
num_bytes: 25156009734.72
num_examples: 23473
- name: validation
num_bytes: 23426886730.218
num_examples: 23474
download_size: 117190597305
dataset_size: 311210584610.23804
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ar
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: tarteel-everyayah
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
train-eval-index:
- config: clean
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
eval_split: test
validation_split: validation
col_mapping:
audio: audio
text: text
reciter: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
---
﷽
# Dataset Card for Tarteel AI's EveryAyah Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tarteel AI](https://www.tarteel.ai/)
- **Repository:** [Needs More Information]
- **Point of Contact:** [Mohamed Saad Ibn Seddik](mailto:ms.ibnseddik@tarteel.ai)
### Dataset Summary
This dataset is a collection of Quranic verses and their transcriptions, with diacritization, by different reciters.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the audio file `audio`, and its transcription called `text`.
The `duration` is in seconds, and the author is `reciter`.
An example from the dataset is:
```
{
'audio': {
'path': None,
'array': array([ 0. , 0. , 0. , ..., -0.00057983,
-0.00085449, -0.00061035]),
'sampling_rate': 16000
},
'duration': 6.478375,
'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ',
'reciter': 'abdulsamad'
}
```
### Length

| Split | Seconds | Minutes | Hours |
| ---------- | ---------- | -------- | ------ |
| Train | 2985111.26 | 49751.85 | 829.20 |
| Validation | 372720.43 | 6212.01 | 103.53 |
| Test | 375509.97 | 6258.50 | 104.31 |
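As a quick consistency check, the minute and hour figures above follow directly from the second totals. A minimal sketch (the seconds values are copied from this card; the conversion itself is generic):

```python
# Reproduce the per-split duration totals reported in this card.
# Only the seconds values are dataset-specific; the conversion is generic.

def seconds_to_minutes_hours(total_seconds):
    """Return (minutes, hours) for a duration given in seconds."""
    minutes = total_seconds / 60
    hours = minutes / 60
    return minutes, hours

split_seconds = {
    "train": 2985111.2642479446,
    "validation": 372720.43139099434,
    "test": 375509.96909399604,
}

for name, secs in split_seconds.items():
    minutes, hours = seconds_to_minutes_hours(secs)
    print(f"{name}: {minutes:.2f} minutes, {hours:.2f} hours")
```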
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: The transcription of the audio file.
- duration: The duration of the audio file.
- reciter: The reciter of the verses.
### Data Splits
| | Train | Test | Validation |
| ----- | ----- | ---- | ---------- |
| dataset | 187785 | 23473 | 23474 |
### reciters
- reciters_count: 36
- reciters: {'abdul_basit',
'abdullah_basfar',
'abdullah_matroud',
'abdulsamad',
'abdurrahmaan_as-sudais',
'abu_bakr_ash-shaatree',
'ahmed_ibn_ali_al_ajamy',
'ahmed_neana',
'akram_alalaqimy',
'alafasy',
'ali_hajjaj_alsuesy',
'aziz_alili',
'fares_abbad',
'ghamadi',
'hani_rifai',
'husary',
'karim_mansoori',
'khaalid_abdullaah_al-qahtaanee',
'khalefa_al_tunaiji',
'maher_al_muaiqly',
'mahmoud_ali_al_banna',
'menshawi',
'minshawi',
'mohammad_al_tablaway',
'muhammad_abdulkareem',
'muhammad_ayyoub',
'muhammad_jibreel',
'muhsin_al_qasim',
'mustafa_ismail',
'nasser_alqatami',
'parhizgar',
'sahl_yassin',
'salaah_abdulrahman_bukhatir',
'saood_ash-shuraym',
'yaser_salamah',
'yasser_ad-dussary'}
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
```
### Contributions
This dataset was created by:
| [
-0.48818665742874146,
-0.5069720149040222,
0.07316521555185318,
0.36593613028526306,
-0.4837208390235901,
0.11665084958076477,
-0.2851870656013489,
-0.20300596952438354,
0.4843340218067169,
0.45448383688926697,
-0.6882842779159546,
-1.1583551168441772,
-0.7409611344337463,
0.30397409200668... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wtcherr/unsplash_20k | wtcherr | 2023-06-11T23:49:45Z | 18 | 0 | null | [
"region:us"
] | 2023-06-11T23:49:45Z | 2023-06-11T23:46:08.000Z | 2023-06-11T23:46:08 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2560499324.351
num_examples: 19999
download_size: 440556200
dataset_size: 2560499324.351
---
# Dataset Card for "unsplash_20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5838398933410645,
-0.14812518656253815,
-0.13866330683231354,
0.45948275923728943,
-0.4253848195075989,
0.36512431502342224,
0.08885879069566727,
-0.23654009401798248,
0.9398912787437439,
0.5936480760574341,
-0.8339673280715942,
-0.8169298768043518,
-0.6401305198669434,
-0.1578139662742... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Patt/ReCoRD_TH | Patt | 2023-06-14T16:50:48Z | 18 | 0 | null | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | 2023-06-14T16:50:48Z | 2023-06-14T16:36:15.000Z | 2023-06-14T16:36:15 | ---
task_categories:
- text-classification
language:
- en
- th
---
# Dataset Card for ReCoRD_TH
### Dataset Description
This dataset is a Thai-translated version of [ReCoRD](https://huggingface.co/datasets/super_glue/viewer/record), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) is used to score the quality of each Thai translation. | [
-0.06309957802295685,
-0.48111674189567566,
-0.07819576561450958,
0.3040338158607483,
-0.6430198550224304,
0.0014064164133742452,
-0.21128146350383759,
-0.11126432567834854,
0.6197904944419861,
0.5553543567657471,
-0.6396071910858154,
-0.950309693813324,
-0.5916363000869751,
0.183153554797... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LangChainDatasets/langchain-howto-queries | LangChainDatasets | 2023-06-25T00:40:36Z | 18 | 1 | null | [
"region:us"
] | 2023-06-25T00:40:36Z | 2023-06-25T00:40:35.000Z | 2023-06-25T00:40:35 | ---
dataset_info:
features:
- name: inputs
dtype: string
splits:
- name: train
num_bytes: 3419
num_examples: 50
download_size: 2769
dataset_size: 3419
---
# Dataset Card for "langchain-howto-queries"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5268210768699646,
-0.41629743576049805,
0.16041533648967743,
0.141851544380188,
-0.14395825564861298,
0.020482851192355156,
0.00282569183036685,
-0.30546653270721436,
1.012075662612915,
0.8585782051086426,
-0.7479135990142822,
-0.9617356061935425,
-0.364879310131073,
-0.2335413992404937... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gabeorlanski/bc-mbpp | gabeorlanski | 2023-07-21T22:03:56Z | 18 | 0 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|mbpp",
"language:en",
"license:apache-2.0",
"code",
"arxiv:2302.01973",
"arxiv:2108.07732",
"region:us"
] | 2023-07-21T22:03:56Z | 2023-06-25T17:09:12.000Z | 2023-06-25T17:09:12 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- code
pretty_name: BabelCode MBPP
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|mbpp
---
# Dataset Card for BabelCode MBPP
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)
### How To Use This Dataset
To use this dataset, you can either use the original [BabelCode Repo](https://github.com/google-research/babelcode), or you can use the [`bc_eval` Metric](https://huggingface.co/spaces/gabeorlanski/bc_eval).
### Dataset Summary
The BabelCode-MBPP (BC-MBPP) dataset converts the [MBPP dataset released by Google](https://arxiv.org/abs/2108.07732) to 16 programming languages.
### Supported Tasks and Leaderboards
### Languages
BC-MBPP supports:
* C++
* C#
* Dart
* Go
* Haskell
* Java
* Javascript
* Julia
* Kotlin
* Lua
* PHP
* Python
* R
* Rust
* Scala
* TypeScript
## Dataset Structure
```python
>>> from datasets import load_dataset
>>> load_dataset("gabeorlanski/bc-mbpp")
DatasetDict({
train: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 5308
})
test: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 6989
})
validation: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 1216
})
prompt: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 160
})
})
```
### Data Fields
- `qid`: The question ID used for running tests.
- `title`: The title of the question.
- `language`: The programming language of the example.
- `text`: The description of the problem.
- `signature`: The signature for the problem.
- `signature_with_docstring`: The signature with the adequately formatted docstring for the given problem.
- `arguments`: The arguments of the problem.
- `solution`: The solution in Python.
- `question_info`: The dict of information used for executing predictions. It has the keys:
- `test_code`: The raw testing script used in the language. If you want to use this, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points. Next, replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction.
- `test_list`: The raw json line of the list of tests for the problem. To load them, use `json.loads`
- `test_case_ids`: The list of test case ids for the problem. These are used to determine if a prediction passes or not.
- `entry_fn_name`: The function name to use as the entry point.
- `entry_cls_name`: The class name to use as the entry point.
- `commands`: The commands used to execute the prediction. Includes a `__FILENAME__` hole that is replaced with the filename.
- `timeouts`: The default timeouts for each command.
- `extension`: The extension for the prediction file.
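The placeholder-filling step described above can be sketched in a few lines of Python. This is an illustrative sketch only: the `stub` string below is a hand-made stand-in rather than an actual BC-MBPP `test_code` script, and `assemble_test_file` is a hypothetical helper, not part of the BabelCode API.

```python
# Sketch of assembling a runnable test file from a question's `test_code`.
# `stub` is a made-up stand-in for a real BC-MBPP testing script.

def assemble_test_file(test_code, fn_name, code_body, cls_name=None):
    """Fill the BabelCode placeholders with concrete names and the prediction."""
    out = test_code.replace("PLACEHOLDER_FN_NAME", fn_name)
    if cls_name is not None:
        out = out.replace("PLACEHOLDER_CLS_NAME", cls_name)
    return out.replace("PLACEHOLDER_CODE_BODY", code_body)

stub = "PLACEHOLDER_CODE_BODY\nassert PLACEHOLDER_FN_NAME(2, 3) == 5"
prediction = "def add(a, b):\n    return a + b"
script = assemble_test_file(stub, "add", prediction)
print(script)
```

The resulting `script` contains no remaining placeholders and can be written to a file with the language's `extension` and executed with the listed `commands`.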
**NOTE:** If you want to use a different function name (or class name for languages that require class names) for the prediction, you must update the `entry_fn_name` and `entry_cls_name` accordingly. For example, if you have the original question with `entry_fn_name` of `add`, but want to change it to `f`, you must update `ds["question_info"]["entry_fn_name"]` to `f`:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("gabeorlanski/bc-mbpp")['test']
>>> # The original entry_fn_name
>>> ds[0]['question_info']['entry_fn_name']
removeOcc
>>> # You MUST update the corresponding entry_fn_name
>>> ds[0]['question_info']['entry_fn_name'] = 'f'
>>> ds[0]['question_info']['entry_fn_name']
f
```
## Dataset Creation
See section 2 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated.
Information on how the original MBPP was curated is located [here](https://huggingface.co/datasets/mbpp).
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{Austin2021ProgramSW,
title={Program Synthesis with Large Language Models},
author={Jacob Austin and Augustus Odena and Maxwell Nye and Maarten Bosma and Henryk Michalewski and David Dohan and Ellen Jiang and Carrie J. Cai and Michael Terry and Quoc V. Le and Charles Sutton},
journal={ArXiv},
year={2021},
volume={abs/2108.07732}
}
``` | [
-0.5489056706428528,
-0.5780572891235352,
0.20220749080181122,
0.38007694482803345,
0.05704395845532417,
-0.1388424038887024,
-0.3220442831516266,
-0.29944220185279846,
0.2313593029975891,
0.38915717601776123,
-0.38917747139930725,
-0.599943995475769,
-0.5711773037910461,
0.090793177485466... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FreedomIntelligence/alpaca-gpt4-arabic | FreedomIntelligence | 2023-08-06T08:07:51Z | 18 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-06T08:07:51Z | 2023-06-26T08:17:14.000Z | 2023-06-26T08:17:14 | ---
license: apache-2.0
---
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | [
-0.40414297580718994,
-0.30496302247047424,
-0.004310851916670799,
0.28069862723350525,
-0.06412237882614136,
0.058366622775793076,
-0.2765141725540161,
-0.43211790919303894,
0.4121001660823822,
0.483478307723999,
-0.9163603782653809,
-0.4689486026763916,
-0.18480102717876434,
0.3091475665... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nRuaif/OpenOrca-GPT3.5 | nRuaif | 2023-07-03T10:52:16Z | 18 | 0 | null | [
"region:us"
] | 2023-07-03T10:52:16Z | 2023-07-02T11:55:39.000Z | 2023-07-02T11:55:39 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
santoshtyss/us_contracts | santoshtyss | 2023-07-03T21:02:57Z | 18 | 1 | null | [
"region:us"
] | 2023-07-03T21:02:57Z | 2023-07-03T17:30:43.000Z | 2023-07-03T17:30:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bias-amplified-splits/qqp | bias-amplified-splits | 2023-07-04T11:47:36Z | 18 | 0 | null | [
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"arxiv:2305.18917",
"arxiv:1804.07461",
"region:us"
] | 2023-07-04T11:47:36Z | 2023-07-03T21:05:01.000Z | 2023-07-03T21:05:01 | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 42391456
num_examples: 297735
- name: train.anti_biased
num_bytes: 8509364
num_examples: 66111
- name: validation.biased
num_bytes: 4698206
num_examples: 32968
- name: validation.anti_biased
num_bytes: 955548
num_examples: 7462
download_size: 70726976
dataset_size: 56554574
- config_name: partial_input
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 42788212
num_examples: 297735
- name: train.anti_biased
num_bytes: 8112608
num_examples: 66111
- name: validation.biased
num_bytes: 4712327
num_examples: 33084
- name: validation.anti_biased
num_bytes: 941427
num_examples: 7346
download_size: 70726976
dataset_size: 56554574
task_categories:
- text-classification
language:
- en
pretty_name: Quora Questions Pairs
---
# Dataset Card for Bias-amplified Splits for QQP
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [GLUE](https://arxiv.org/abs/1804.07461)
### Dataset Summary
Bias-amplified splits are a novel evaluation framework for assessing model robustness: we amplify dataset biases in the training data and challenge models to generalize beyond them. The framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to the Quora Question Pairs dataset (QQP), a dataset composed of question pairs where the task is to determine if the questions are paraphrases of each other (have the same meaning).
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 93.0 | 77.6 |
| Biased training split | 87.0 | 36.8 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 93.0 | 81.3 |
| Biased training split | 90.3 | 63.9 |
#### Loading the Data
```
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/qqp", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['validation.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from QQP (GLUE version), and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```
{
"idx": 56,
"question1": "How do I buy used car in India?",
"question2": "Which used car should I buy in India?",
"label": 0
}
```
### Data Fields
- `idx`: unique identifier for the example within its original data splits (e.g., validation set)
- `question1`: a question asked on Quora
- `question2`: a question asked on Quora
- `label`: one of `0` and `1` (`not duplicate` and `duplicate`)
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
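As a toy illustration of the minority-examples idea (not the paper's implementation), one can cluster example representations and flag examples falling in the smallest cluster as candidate anti-biased examples. The 1-D `reprs` values below are hand-made stand-ins for learned representations so that the sketch is runnable:

```python
# Toy sketch of the "minority examples" method: cluster representations,
# then treat examples in the smallest cluster as candidate anti-biased examples.

def assign(points, centers):
    """Assign each point to its nearest center (index into `centers`)."""
    return [min(range(len(centers)), key=lambda c: abs(p - centers[c])) for p in points]

def kmeans_1d(points, k, iters=20):
    """Plain 1-D k-means; returns the final cluster label of each point."""
    centers = points[:k]
    for _ in range(iters):
        labels = assign(points, centers)
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign(points, centers)

# Two dense "biased" clusters plus a couple of outliers.
reprs = [0.0, 0.1, 0.05, 0.12, 1.0, 1.1, 1.05, 1.12, 5.0, 5.2]
labels = kmeans_1d(reprs, k=3)
sizes = {c: labels.count(c) for c in set(labels)}
minority_cluster = min(sizes, key=sizes.get)
minority_examples = [i for i, l in enumerate(labels) if l == minority_cluster]
print("minority example indices:", minority_examples)
```

In the real method, representations come from a trained model and clustering happens in that high-dimensional space; the smallest clusters then populate the anti-biased split.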
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 297735 |
| Train - anti-biased | 66111 |
| Validation - biased | 32968 |
| Validation - anti-biased | 7462 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 297735 |
| Train - anti-biased | 66111 |
| Validation - biased | 33084 |
| Validation - anti-biased | 7346 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
QQP data was released by Quora and released under the GLUE benchmark.
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
``` | [
-0.6896111965179443,
-0.676742434501648,
0.057099513709545135,
-0.0073424456641077995,
-0.40143635869026184,
-0.06886560469865799,
0.0011478582164272666,
-0.2654758393764496,
0.222502201795578,
0.27829742431640625,
-0.7247430086135864,
-0.39304110407829285,
-0.5650832653045654,
-0.24578316... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DynamicSuperb/SpeechTextMatching_LibriSpeech-TestClean | DynamicSuperb | 2023-08-01T06:43:16Z | 18 | 0 | null | [
"region:us"
] | 2023-08-01T06:43:16Z | 2023-07-09T15:52:53.000Z | 2023-07-09T15:52:53 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
- name: transcription
dtype: string
splits:
- name: test
num_bytes: 372177496.46
num_examples: 2620
download_size: 350698434
dataset_size: 372177496.46
---
# Dataset Card for "speechTextMatching_Librispeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4874778985977173,
-0.2835647165775299,
0.059431351721286774,
0.3007906377315521,
-0.050972502678632736,
-0.08327087014913559,
-0.09848380088806152,
-0.23394769430160522,
0.8949525356292725,
0.4278896450996399,
-0.9388992786407471,
-0.7859598398208618,
-0.5453968644142151,
-0.45366612076... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aaryan333/fer2013_train_publicTest_privateTest | Aaryan333 | 2023-07-09T22:00:12Z | 18 | 0 | null | [
"region:us"
] | 2023-07-09T22:00:12Z | 2023-07-09T22:00:01.000Z | 2023-07-09T22:00:01 | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': Angry
'1': Disgust
'2': Fear
'3': Happy
'4': Sad
'5': Surprise
'6': Neutral
- name: image
dtype: image
splits:
- name: train
num_bytes: 106750555.375
num_examples: 28709
- name: publicTest
num_bytes: 13383908.375
num_examples: 3589
- name: privateTest
num_bytes: 13384809.375
num_examples: 3589
download_size: 133185182
dataset_size: 133519273.125
---
# Dataset Card for "fer2013_train_publicTest_privateTest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6219202280044556,
-0.151574045419693,
0.24314244091510773,
0.6408222913742065,
0.06560302525758743,
-0.1694069802761078,
0.24856343865394592,
0.05629626661539078,
0.340442419052124,
0.36007213592529297,
-0.7363564372062683,
-0.6134357452392578,
-0.41546696424484253,
-0.01221804507076740... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jstet/quotes-500k | jstet | 2023-07-12T15:14:13Z | 18 | 0 | null | [
"region:us"
] | 2023-07-12T15:14:13Z | 2023-07-12T12:15:40.000Z | 2023-07-12T12:15:40 | Taken from Kaggle: https://www.kaggle.com/datasets/manann/quotes-500k?resource=download
It was uploaded there from this repo: https://github.com/ShivaliGoel/Quotes-500K
Paper:
Goel, S., Madhok, R., & Garg, S. (2018). Proposing Contextually Relevant Quotes for Images. Advances in Information Retrieval. Springer. doi: 10.1007/978-3-319-76941-7_49 | [
-0.2253638058900833,
-0.6508089900016785,
0.5232416987419128,
0.04898209497332573,
-0.41639408469200134,
-0.47094735503196716,
-0.05737790837883949,
-0.5968301296234131,
0.30758431553840637,
0.7154232859611511,
-0.5387181639671326,
-0.1367509514093399,
-0.35404518246650696,
0.1439879089593... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
9wimu9/ada_derana_sinhala | 9wimu9 | 2023-07-13T17:12:52Z | 18 | 1 | null | [
"region:us"
] | 2023-07-13T17:12:52Z | 2023-07-13T15:12:05.000Z | 2023-07-13T15:12:05 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: heading
dtype: string
- name: content
sequence: string
splits:
- name: train
num_bytes: 418940569
num_examples: 170420
download_size: 159392910
dataset_size: 418940569
---
# Dataset Card for "ada_derana_sinhala"
This dataset includes Ada Derana Sinhala website news articles from January 6, 2010 to July 11, 2023. You can visit the original web page by using the "id". | [
-0.1805328130722046,
-0.6828121542930603,
0.21107397973537445,
0.07719559967517853,
-0.534705638885498,
-0.21408921480178833,
0.2253275215625763,
-0.5225255489349365,
0.5961042046546936,
0.29564762115478516,
-0.6685406565666199,
-0.7582036256790161,
-0.12456537038087845,
0.2803734242916107... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/popqa-parents-lying | atmallen | 2023-07-19T15:57:51Z | 18 | 0 | null | [
"region:us"
] | 2023-07-19T15:57:51Z | 2023-07-19T00:40:17.000Z | 2023-07-19T00:40:17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: true_label
dtype: int64
splits:
- name: train
num_bytes: 3223356
num_examples: 31936
- name: validation
num_bytes: 695352
num_examples: 6848
- name: test
num_bytes: 700442
num_examples: 6880
download_size: 750525
dataset_size: 4619150
---
# Dataset Card for "popqa-parents-lying"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5243510603904724,
-0.20978711545467377,
0.08504532277584076,
0.25233423709869385,
0.04854416102170944,
0.02915000542998314,
0.4460778534412384,
-0.013006974011659622,
0.4512770473957062,
0.47135666012763977,
-1.290326476097107,
-0.4352415204048157,
-0.46045976877212524,
-0.4542910158634... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
diffusers-parti-prompts/sdxl-1.0-refiner | diffusers-parti-prompts | 2023-07-30T16:22:20Z | 18 | 0 | null | [
"region:us"
] | 2023-07-30T16:22:20Z | 2023-07-30T13:33:05.000Z | 2023-07-30T13:33:05 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: images
dtype: image
- name: model_name
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 189993385.856
num_examples: 1632
download_size: 189456016
dataset_size: 189993385.856
---
# Dataset Card for "sdxl-1.0-refiner"
Dataset was generated using the code below:
```python
import torch
from datasets import Dataset, Features
from datasets import Image as ImageFeature
from datasets import Value, load_dataset
from diffusers import DDIMScheduler, DiffusionPipeline
import PIL
def main():
print("Loading dataset...")
parti_prompts = load_dataset("nateraw/parti-prompts", split="train")
print("Loading pipeline...")
ckpt_id = "stabilityai/stable-diffusion-xl-base-1.0"
refiner_ckpt_id = "stabilityai/stable-diffusion-xl-refiner-1.0"
pipe = DiffusionPipeline.from_pretrained(
ckpt_id, torch_dtype=torch.float16, use_auth_token=True
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.set_progress_bar_config(disable=True)
refiner = DiffusionPipeline.from_pretrained(
refiner_ckpt_id,
torch_dtype=torch.float16,
use_auth_token=True
).to("cuda")
refiner.scheduler = DDIMScheduler.from_config(refiner.scheduler.config)
refiner.set_progress_bar_config(disable=True)
seed = 0
generator = torch.Generator("cuda").manual_seed(seed)
print("Running inference...")
main_dict = {}
for i in range(len(parti_prompts)):
sample = parti_prompts[i]
prompt = sample["Prompt"]
latent = pipe(
prompt,
generator=generator,
num_inference_steps=100,
guidance_scale=7.5,
output_type="latent",
).images[0]
image_refined = refiner(
prompt=prompt,
image=latent[None, :],
generator=generator,
num_inference_steps=100,
guidance_scale=7.5,
).images[0]
image = image_refined.resize((256, 256), resample=PIL.Image.Resampling.LANCZOS)
img_path = f"sd_xl_{i}.png"
image.save(img_path)
main_dict.update(
{
prompt: {
"img_path": img_path,
"Category": sample["Category"],
"Challenge": sample["Challenge"],
"Note": sample["Note"],
"model_name": ckpt_id,
"seed": seed,
}
}
)
def generation_fn():
for prompt in main_dict:
prompt_entry = main_dict[prompt]
yield {
"Prompt": prompt,
"Category": prompt_entry["Category"],
"Challenge": prompt_entry["Challenge"],
"Note": prompt_entry["Note"],
"images": {"path": prompt_entry["img_path"]},
"model_name": prompt_entry["model_name"],
"seed": prompt_entry["seed"],
}
print("Preparing HF dataset...")
ds = Dataset.from_generator(
generation_fn,
features=Features(
Prompt=Value("string"),
Category=Value("string"),
Challenge=Value("string"),
Note=Value("string"),
images=ImageFeature(),
model_name=Value("string"),
seed=Value("int64"),
),
)
ds_id = "diffusers-parti-prompts/sdxl-1.0-refiner"
ds.push_to_hub(ds_id)
if __name__ == "__main__":
main()
``` | [
-0.5134124755859375,
-0.34878009557724,
0.519457995891571,
0.12062132358551025,
-0.2671433985233307,
-0.1817718893289566,
0.05876440554857254,
0.09444189816713333,
-0.14920693635940552,
0.6100714802742004,
-0.8774027824401855,
-0.6223871111869812,
-0.531038224697113,
0.00885231513530016,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
giuid/qrecc_raw_context | giuid | 2023-08-07T10:58:53Z | 18 | 0 | null | [
"region:us"
] | 2023-08-07T10:58:53Z | 2023-08-03T15:36:03.000Z | 2023-08-03T15:36:03 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nampdn-ai/mini-en | nampdn-ai | 2023-08-27T00:22:30Z | 18 | 6 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"source_datasets:tiiuae/falcon-refinedweb",
"source_datasets:JeanKaddour/minipile",
"language:en",
"license:apache-2.0",
"arxiv:2306.01116",
"arxiv:2304.08442",
"region:us"
] | 2023-08-27T00:22:30Z | 2023-08-15T11:56:29.000Z | 2023-08-15T11:56:29 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: Tiny English
size_categories:
- 100K<n<1M
source_datasets:
- tiiuae/falcon-refinedweb
- JeanKaddour/minipile
---
# Tiny English
A collection of short texts that have been curated for long-term human value. The texts in this dataset have been filtered from the [falcon-refinedweb](https://arxiv.org/abs/2306.01116) and [minipile](https://arxiv.org/abs/2304.08442) datasets to ensure better quality while keeping the collection tiny in size.
The tiny-en dataset is concise and small, yet highly diverse, making it an excellent resource for training natural language processing models. Despite its compact size, the dataset offers a wide range of content that has been carefully selected for its long-term human value. This makes it an ideal choice for researchers and developers who want to train their models on a diverse and high-quality dataset without having to deal with the challenges of working with large amounts of data.
The short length of the texts in the tiny-en dataset makes it easy to work with, while the long-term human value of the content ensures that the models trained on this dataset will be able to produce meaningful and relevant results. So, if you’re looking for a concise, small, yet highly diverse dataset for your natural language processing needs, be sure to check out the tiny-en dataset!
Explore the repository and discover the potential of the tiny series datasets for your research and development efforts. I am always looking for ways to improve this dataset and make it even more useful to the community, so please don't hesitate to share your feedback with me. Thank you for your interest in tiny-en! 😊 | [
-0.4830107092857361,
-0.52523273229599,
0.307344526052475,
0.045241642743349075,
-0.25464746356010437,
-0.007424650713801384,
-0.7067662477493286,
-0.6793504357337952,
0.36730635166168213,
0.31133678555488586,
-0.5614829063415527,
-0.3924669921398163,
-0.46228405833244324,
0.45558840036392... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nathansutton/data-science-job-descriptions | nathansutton | 2023-08-15T13:39:56Z | 18 | 0 | null | [
"region:us"
] | 2023-08-15T13:39:56Z | 2023-08-15T13:27:01.000Z | 2023-08-15T13:27:01 | __Data Science Job Descriptions__
These data encompass the title, company, and description of postings on the [outer-join](https://outerjoin.us/remote-data-science-jobs) job board between October 2021 and today.
---
license: wtfpl
task_categories:
- text-classification
- feature-extraction
language:
- en
tags:
- jobs
pretty_name: ds-jobs
size_categories:
- 1K<n<10K
--- | [
-0.3961820602416992,
-0.4790702164173126,
0.5837591886520386,
0.02193620800971985,
-0.3223249614238739,
0.3074302077293396,
0.2886839807033539,
-0.8303028345108032,
0.4143618047237396,
0.633478045463562,
-1.1618412733078003,
-0.8076403737068176,
-0.3248693645000458,
0.24750764667987823,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pretam/hi-kn | Pretam | 2023-08-17T17:36:26Z | 18 | 0 | null | [
"region:us"
] | 2023-08-17T17:36:26Z | 2023-08-17T12:56:03.000Z | 2023-08-17T12:56:03 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ProgramComputer/voxceleb | ProgramComputer | 2023-11-04T21:44:05Z | 18 | 5 | null | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:image-classification",
"task_categories:video-classification",
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"arxiv:1706.08612",
"doi:10.57967/hf/0999",
"region:us"
] | 2023-11-04T21:44:05Z | 2023-08-17T18:57:37.000Z | 2023-08-17T18:57:37 | ---
task_categories:
- automatic-speech-recognition
- audio-classification
- image-classification
- video-classification
size_categories:
- 100K<n<1M
license: cc-by-4.0
datasets:
- voxceleb
- voxceleb2
---
## Dataset Description
- **Homepage:** [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/)
This dataset includes both VoxCeleb and VoxCeleb2.
# Multipart Zips
Pre-joined zips are provided for convenience, but the files listed below are *NOT* part of the original datasets:
vox2_mp4_1.zip - vox2_mp4_6.zip
vox2_aac_1.zip - vox2_aac_2.zip
# Joining Zip
```
cat vox1_dev* > vox1_dev_wav.zip
```
```
cat vox2_dev_aac* > vox2_aac.zip
```
```
cat vox2_dev_mp4* > vox2_mp4.zip
```
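The `cat` commands above work because each part is a plain byte-level slice of one archive, and the shell glob expands the parts in lexicographic order. A self-contained demo of the same idea, using throwaway filenames rather than the real VoxCeleb parts:

```shell
# Demo: byte-level split followed by cat rejoin (throwaway files, not the real parts)
printf 'voxceleb multipart demo payload' > original.bin
split -b 8 original.bin part_    # produces 8-byte chunks: part_aa, part_ab, ...
cat part_* > rejoined.bin        # the glob sorts parts lexicographically
cmp -s original.bin rejoined.bin && echo "join OK"
```

If the rejoined archive differs from the original (e.g. a part is missing or truncated), `cmp` fails and nothing is printed, so this doubles as a quick integrity check after joining.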
### Citation Information
```
@article{Nagrani19,
author = "Arsha Nagrani and Joon~Son Chung and Weidi Xie and Andrew Zisserman",
title = "Voxceleb: Large-scale speaker verification in the wild",
journal = "Computer Science and Language",
year = "2019",
publisher = "Elsevier",
}
@inProceedings{Chung18b,
author = "Chung, J.~S. and Nagrani, A. and Zisserman, A.",
title = "VoxCeleb2: Deep Speaker Recognition",
booktitle = "INTERSPEECH",
year = "2018",
}
@article{DBLP:journals/corr/NagraniCZ17,
author = {Arsha Nagrani and
Joon Son Chung and
Andrew Zisserman},
title = {VoxCeleb: a large-scale speaker identification dataset},
journal = {CoRR},
volume = {abs/1706.08612},
year = {2017},
url = {http://arxiv.org/abs/1706.08612},
eprinttype = {arXiv},
eprint = {1706.08612},
timestamp = {Mon, 13 Aug 2018 16:47:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/NagraniCZ17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@ProgramComputer](https://github.com/ProgramComputer) for adding this dataset. | [
-0.5809645056724548,
-0.5558570027351379,
0.09482280164957047,
0.19676198065280914,
-0.03792155906558037,
0.07307323068380356,
-0.4983293116092682,
-0.25205546617507935,
0.15883973240852356,
0.5295292139053345,
-0.6089937090873718,
-0.6607275009155273,
-0.3434807360172272,
0.13324384391307... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_Corianas__Quokka_2.7b | open-llm-leaderboard | 2023-09-18T03:06:10Z | 18 | 0 | null | [
"region:us"
] | 2023-09-18T03:06:10Z | 2023-08-17T22:25:42.000Z | 2023-08-17T22:25:42 | ---
pretty_name: Evaluation run of Corianas/Quokka_2.7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Corianas/Quokka_2.7b](https://huggingface.co/Corianas/Quokka_2.7b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Corianas__Quokka_2.7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-18T03:05:58.053951](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_2.7b/blob/main/results_2023-09-18T03-05-58.053951.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.027055369127516778,\n\
\ \"em_stderr\": 0.0016615386418947858,\n \"f1\": 0.0843078859060403,\n\
\ \"f1_stderr\": 0.0021162612701253174,\n \"acc\": 0.27932236818091244,\n\
\ \"acc_stderr\": 0.007830181847252834\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.027055369127516778,\n \"em_stderr\": 0.0016615386418947858,\n\
\ \"f1\": 0.0843078859060403,\n \"f1_stderr\": 0.0021162612701253174\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0037907505686125853,\n \
\ \"acc_stderr\": 0.0016927007401501802\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5548539857932123,\n \"acc_stderr\": 0.013967662954355487\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Corianas/Quokka_2.7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_18T03_05_58.053951
path:
- '**/details_harness|drop|3_2023-09-18T03-05-58.053951.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-18T03-05-58.053951.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_18T03_05_58.053951
path:
- '**/details_harness|gsm8k|5_2023-09-18T03-05-58.053951.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-18T03-05-58.053951.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:58:12.174583.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:58:12.174583.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:58:12.174583.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_18T03_05_58.053951
path:
- '**/details_harness|winogrande|5_2023-09-18T03-05-58.053951.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-18T03-05-58.053951.parquet'
- config_name: results
data_files:
- split: 2023_07_19T15_58_12.174583
path:
- results_2023-07-19T15:58:12.174583.parquet
- split: 2023_09_18T03_05_58.053951
path:
- results_2023-09-18T03-05-58.053951.parquet
- split: latest
path:
- results_2023-09-18T03-05-58.053951.parquet
---
# Dataset Card for Evaluation run of Corianas/Quokka_2.7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Corianas/Quokka_2.7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Corianas/Quokka_2.7b](https://huggingface.co/Corianas/Quokka_2.7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Corianas__Quokka_2.7b",
"harness_winogrande_5",
	split="latest")
```
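As the split names above suggest, each run's split name is simply the run timestamp (as it appears in the parquet file names) with `-` and `:` normalized to `_`. A small helper illustrating the mapping (a convenience sketch, not part of the `datasets` API):

```python
def run_timestamp_to_split(timestamp: str) -> str:
    """Map a run timestamp as used in the parquet file names
    (e.g. '2023-09-18T03-05-58.053951') to the corresponding
    split name (e.g. '2023_09_18T03_05_58.053951')."""
    return timestamp.replace("-", "_").replace(":", "_")

# Older runs used ':' in file names, newer runs use '-';
# both normalize to the same underscore convention.
print(run_timestamp_to_split("2023-07-19T15:58:12.174583"))  # 2023_07_19T15_58_12.174583
print(run_timestamp_to_split("2023-09-18T03-05-58.053951"))  # 2023_09_18T03_05_58.053951
```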
## Latest results
These are the [latest results from run 2023-09-18T03:05:58.053951](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_2.7b/blob/main/results_2023-09-18T03-05-58.053951.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.027055369127516778,
"em_stderr": 0.0016615386418947858,
"f1": 0.0843078859060403,
"f1_stderr": 0.0021162612701253174,
"acc": 0.27932236818091244,
"acc_stderr": 0.007830181847252834
},
"harness|drop|3": {
"em": 0.027055369127516778,
"em_stderr": 0.0016615386418947858,
"f1": 0.0843078859060403,
"f1_stderr": 0.0021162612701253174
},
"harness|gsm8k|5": {
"acc": 0.0037907505686125853,
"acc_stderr": 0.0016927007401501802
},
"harness|winogrande|5": {
"acc": 0.5548539857932123,
"acc_stderr": 0.013967662954355487
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
open-llm-leaderboard/details_tiiuae__falcon-7b | open-llm-leaderboard | 2023-10-29T13:14:03Z | 18 | 0 | null | [
"region:us"
] | 2023-10-29T13:14:03Z | 2023-08-18T00:12:34.000Z | 2023-08-18T00:12:34 | ---
pretty_name: Evaluation run of tiiuae/falcon-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 122 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 4 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"latest\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run (and is used to compute and display the aggregated metrics on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tiiuae__falcon-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"latest\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T10:06:45.584443](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-7b/blob/main/results_2023-09-17T10-06-45.584443.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n\
\ \"em_stderr\": 0.00033145814652193653,\n \"f1\": 0.04824664429530208,\n\
\ \"f1_stderr\": 0.0012232481165562455,\n \"acc\": 0.3751460800288181,\n\
\ \"acc_stderr\": 0.008496930501481662\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.00033145814652193653,\n\
\ \"f1\": 0.04824664429530208,\n \"f1_stderr\": 0.0012232481165562455\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.026535253980288095,\n \
\ \"acc_stderr\": 0.004427045987265165\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7237569060773481,\n \"acc_stderr\": 0.01256681501569816\n\
\ }\n}\n```"
repo_url: https://huggingface.co/tiiuae/falcon-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|arc:challenge|25_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_09T19_07_27.412342
path:
- '**/details_harness|drop|3_2023-09-09T19-07-27.412342.parquet'
- split: 2023_09_17T10_06_45.584443
path:
- '**/details_harness|drop|3_2023-09-17T10-06-45.584443.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T10-06-45.584443.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_09T19_07_27.412342
path:
- '**/details_harness|gsm8k|5_2023-09-09T19-07-27.412342.parquet'
- split: 2023_09_17T10_06_45.584443
path:
- '**/details_harness|gsm8k|5_2023-09-17T10-06-45.584443.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T10-06-45.584443.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hellaswag|10_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T10:51:47.706539.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T10:51:47.706539.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T10:51:47.706539.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_09T19_07_27.412342
path:
- '**/details_harness|winogrande|5_2023-09-09T19-07-27.412342.parquet'
- split: 2023_09_17T10_06_45.584443
path:
- '**/details_harness|winogrande|5_2023-09-17T10-06-45.584443.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T10-06-45.584443.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:05:31.227903.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_abstract_algebra_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_anatomy_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_astronomy_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_business_ethics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_clinical_knowledge_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_biology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_chemistry_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_computer_science_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_mathematics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_medicine_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_college_physics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_computer_security_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_conceptual_physics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_econometrics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_electrical_engineering_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_elementary_mathematics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_formal_logic_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_global_facts_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_biology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_chemistry_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_computer_science_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_european_history_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_geography_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_mathematics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_microeconomics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_physics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_psychology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_statistics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_us_history_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_high_school_world_history_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_human_aging_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_human_sexuality_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_international_law_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_jurisprudence_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_logical_fallacies_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_machine_learning_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_management_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_marketing_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_medical_genetics_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_miscellaneous_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_moral_disputes_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_moral_scenarios_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_nutrition_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_philosophy_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_prehistory_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_professional_accounting_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_professional_law_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_professional_medicine_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_professional_psychology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_public_relations_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_security_studies_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_sociology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_us_foreign_policy_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_virology_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:05:31.227903.parquet'
- config_name: original_mmlu_world_religions_5
data_files:
- split: 2023_08_28T20_05_31.227903
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:05:31.227903.parquet'
- split: latest
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:05:31.227903.parquet'
- config_name: results
data_files:
- split: 2023_07_19T10_51_47.706539
path:
- results_2023-07-19T10:51:47.706539.parquet
- split: 2023_08_28T20_05_31.227903
path:
- results_2023-08-28T20:05:31.227903.parquet
- split: 2023_09_09T19_07_27.412342
path:
- results_2023-09-09T19-07-27.412342.parquet
- split: 2023_09_17T10_06_45.584443
path:
- results_2023-09-17T10-06-45.584443.parquet
- split: latest
path:
- results_2023-09-17T10-06-45.584443.parquet
---
# Dataset Card for Evaluation run of tiiuae/falcon-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/tiiuae/falcon-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 122 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_tiiuae__falcon-7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T10:06:45.584443](https://huggingface.co/datasets/open-llm-leaderboard/details_tiiuae__falcon-7b/blob/main/results_2023-09-17T10-06-45.584443.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0010486577181208054,
"em_stderr": 0.00033145814652193653,
"f1": 0.04824664429530208,
"f1_stderr": 0.0012232481165562455,
"acc": 0.3751460800288181,
"acc_stderr": 0.008496930501481662
},
"harness|drop|3": {
"em": 0.0010486577181208054,
"em_stderr": 0.00033145814652193653,
"f1": 0.04824664429530208,
"f1_stderr": 0.0012232481165562455
},
"harness|gsm8k|5": {
"acc": 0.026535253980288095,
"acc_stderr": 0.004427045987265165
},
"harness|winogrande|5": {
"acc": 0.7237569060773481,
"acc_stderr": 0.01256681501569816
}
}
```
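For quick inspection, the per-task entries in this JSON can be flattened into a small metric table; the dict below simply mirrors the values shown above (stderr fields omitted for brevity):

```python
# Mirror of the per-task results shown above (stderr fields omitted for brevity).
latest = {
    "harness|drop|3": {"em": 0.0010486577181208054, "f1": 0.04824664429530208},
    "harness|gsm8k|5": {"acc": 0.026535253980288095},
    "harness|winogrande|5": {"acc": 0.7237569060773481},
}

# Flatten into (task, metric, value) rows for easy tabulation.
rows = [(task, metric, value)
        for task, metrics in latest.items()
        for metric, value in metrics.items()]

for task, metric, value in rows:
    print(f"{task:24s} {metric:4s} {value:.4f}")
```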
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.46946537494659424,
-0.6870526671409607,
0.2165789157152176,
0.22691011428833008,
-0.10434943437576294,
0.17462539672851562,
-0.30797305703163147,
-0.16511687636375427,
0.4694337248802185,
0.5286344885826111,
-0.6907863616943359,
-0.924413800239563,
-0.6799955368041992,
0.212535634636878... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
theoldmandthesea/17k_business_book | theoldmandthesea | 2023-08-20T08:14:02Z | 18 | 0 | null | [
"region:us"
] | 2023-08-20T08:14:02Z | 2023-08-20T01:03:38.000Z | 2023-08-20T01:03:38 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.545617401599884,
-0.42588168382644653,
-0.051285725086927414,
0.38739174604415894,
-0.4620097875595093,
0.05422865226864815,
-0.24659410119056702,
-0.2884671688079834,
0.6999505162239075,
0.5781952142715454,
-0.9070088267326355,
-1.1513409614562988,
-0.756676435470581,
0.029052479192614... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TaylorAI/pubmed_commercial | TaylorAI | 2023-08-26T07:32:30Z | 18 | 12 | null | [
"region:us"
] | 2023-08-26T07:32:30Z | 2023-08-23T19:00:38.000Z | 2023-08-23T19:00:38 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
daniilak/vk_groups | daniilak | 2023-08-27T15:21:43Z | 18 | 2 | null | [
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:ru",
"license:cc0-1.0",
"region:us"
] | 2023-08-27T15:21:43Z | 2023-08-25T09:07:05.000Z | 2023-08-25T09:07:05 | ---
license: cc0-1.0
task_categories:
- text-generation
language:
- ru
pretty_name: VK.Groups
size_categories:
- 100M<n<1B
---
### Dataset
The dataset contains a list of all public pages (communities or groups) of the social network VKontakte (VK.COM).
The current count is 222,130,000 communities.
The dataset has 25 fields. The CSV files are tab-delimited ("\t").
There is also a list of verified groups (41,614 elements).
### Fields
Full versions contain the following fields:
["id", "screen_name", "members_count", "name", "type", "verified", "description", "activity", "can_see_all_posts", "city_id", "city_title", "contacts", "country_id", "country_title", "deactivated", "deactivated_message", "deactivated_type", "finish_date", "is_closed", "photo_100", "photo_200", "photo_50", "site", "start_date", "status"]
Minified versions contain the following fields:
[ "id", "members_count", "name", "type", "verified", "activity", "city_id", "country_id", "deactivated", "finish_date", "is_closed", "site"]
Description:
* id - integer Community ID
* screen_name - string Short address, for example, apiclub
* members_count - integer Number of members in the community
* name - string Community name.
* type - string Community type: group — group; page — public page; event — event
* verified - integer Information about whether the community has been verified. Possible values: 1 - is; 0 - is not
* description - string Community description text
* activity - string Public theme string. For groups, a string value is returned, whether the group is open or not, and for events, the start date
* can_see_all_posts - integer Information about whether it is allowed to see other people's posts on the community wall. Possible values: 1 - can; 0 - cannot
* city_id - integer id of the city specified in the community information
* city_title - string name of the city specified in the community information
* contacts - json-array Information from the contact block of the public page. An array of objects, each of which can contain fields: user_id (integer) — user ID; desc (string) - position; phone (string) — phone number; email (string) — email address
* country_id - integer ID of the country specified in the community information
* country_title - string name of the country specified in the community information
* deactivated - string Returned if the community has been deleted or disabled. Possible values: deleted — the community has been deleted; banned - the community is blocked;
* deactivated_message - string Reason for blocking the community
* deactivated_type - string Returned if the community is deleted or banned, contains deleted or banned
* finish_date - Meeting communities contain the end time of the meeting in unixtime format. For public pages, it contains only start_date — the date of foundation in YYYYMMDD format
* is_closed - integer Whether the community is closed. Possible values: 0 — open; 1 - closed; 2 - private
* photo_100 - string URL of the main photo with a size of 100x100px
* photo_200 - string URL of the main photo in the maximum size
* photo_50 - string URL of the main photo with size 50x50px
* site - string Site address specified in the profile.
* start_date - Meeting communities contain the start time of the meeting in unixtime format. For public pages, it contains only start_date — the date of foundation in YYYYMMDD format
* status - string Community status
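Since the CSV files are tab-delimited, the minified dump can be read directly with Python's `csv` module; the sketch below assumes the column order listed above, and the file path and sample line are hypothetical:

```python
import csv
import io

# Column order of the minified version, as listed above.
MIN_FIELDS = ["id", "members_count", "name", "type", "verified", "activity",
              "city_id", "country_id", "deactivated", "finish_date",
              "is_closed", "site"]

def read_groups(path):
    """Yield one dict per community from a tab-delimited dump (path is hypothetical)."""
    with open(path, newline="", encoding="utf-8") as f:
        yield from csv.DictReader(f, fieldnames=MIN_FIELDS, delimiter="\t")

# Demonstration on an in-memory line instead of a real file:
sample = "1\t100\tVK API\tgroup\t1\tIT\t1\t1\t\t\t0\thttps://vk.com/apiclub\n"
row = next(csv.DictReader(io.StringIO(sample), fieldnames=MIN_FIELDS, delimiter="\t"))
print(row["name"], row["members_count"], row["is_closed"])  # VK API 100 0
```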
### Dataset Creation
The data was scraped through the [VK API method `groups.getById`](https://dev.vk.com/ru/method/groups.getById).
### License
The license for this dataset is public; you can use it in your scientific research, design work, and other projects. The only condition is the publication of a link to this dataset.
## RU
### Набор данных
Набор данных содержит список всех публичных страниц (или, как их называют, сообщества или группы) социальной сети ВКонтакте.
Текущее число составляет 222 130 000 групп.
Датасет имеет 25 полей. В качестве разделителя используется символ табуляции "\t".
Также есть список верифицированных групп - 41614 элементов
### Поля
Полная версия содержит следующие поля:
["id", "screen_name", "members_count", "name", "type", "verified", "description", "activity", "can_see_all_posts", "city_id", "city_title", "contacts", "country_id", "country_title", "deactivated", "deactivated_message", "deactivated_type", "finish_date", "is_closed", "photo_100", "photo_200", "photo_50", "site", "start_date", "status"]
Минифицированная версия:
[ "id", "members_count", "name", "type", "verified", "activity", "city_id", "country_id", "deactivated", "finish_date", "is_closed", "site"]
Подробно:
* id - integer Идентификатор сообщества
* screen_name - string Название сообщества
* members_count - string Короткий адрес, например, apiclub
* name - string Название сообщества.
* type - string Тип сообщества: group — группа; page — публичная страница; event — мероприятие
* verified - integer Информация о том, верифицировано ли сообщество. Возможные значения: 1 — является; 0 — не является
* description - string Текст описания сообщества
* activity - string Строка тематики паблика. У групп возвращается строковое значение, открыта ли группа или нет, а у событий дата начала
* can_see_all_posts - integer Информация о том, разрешено ли видеть чужие записи на стене сообщества. Возможные значения: 1 — может; 0 — не может
* city_id - integer идентификатор города, указанный в информации о сообществе
* city_title - - integer название города, указанный в информации о сообществе
* contacts - json-array Информация из блока контактов публичной страницы. Массив объектов, каждый из которых может содержать поля: user_id (integer) — идентификатор пользователя; desc (string) — должность; phone (string) — номер телефона; email (string) — адрес e-mail
* country_id - integer идентификатор страны, указанной в информации о сообществе
* country_title - string название страны, указанной в информации о сообществе
* deactivated - string Возвращается в случае, если сообщество удалено или заблокировано. Возможные значения: deleted — сообщество удалено; banned — сообщество заблокировано;
* deactivated_message - string Причина блокировки сообщества
* deactivated_type - string Возвращается, если сообщество удалено или заблокировано, содержит значение deleted или banned
* finish_date - Сообщества-встречи содержат время конца встречи в формате unixtime. Для публичных страниц содержит только start_date — дата основания в формате YYYYMMDD
* is_closed - integer Является ли сообщество закрытым. Возможные значения: 0 — открытое; 1 — закрытое; 2 — частное
* photo_100 - string URL главной фотографии с размером 100х100px
* photo_200 - string URL главной фотографии в максимальном размере
* photo_50 - string URL главной фотографии с размером 50x50px
* site - string Адрес сайта, указанный в профиле.
* start_date - Сообщества-встречи содержат время начала встречи в формате unixtime. Для публичных страниц содержит только start_date — дата основания в формате YYYYMMDD
* status - string Статус сообщества
### Лицензия
Лицензия на этот набор данных общедоступная, вы можете использовать его в своих научных исследованиях, проектных работах и других работах. Единственное условие — публикация ссылки на этот набор данных. | [
-0.6372873187065125,
-0.5904014110565186,
0.5325539708137512,
0.31640464067459106,
-0.5487503409385681,
0.00009015123214339837,
0.12627971172332764,
-0.22092828154563904,
0.5875826478004456,
0.4147990643978119,
-0.6452924013137817,
-1.183314323425293,
-0.4361811578273773,
0.250395774841308... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
indiejoseph/yue-zh-translation | indiejoseph | 2023-10-08T20:52:38Z | 18 | 1 | null | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:yue",
"language:zh",
"license:cc-by-4.0",
"region:us"
] | 2023-10-08T20:52:38Z | 2023-08-28T10:19:35.000Z | 2023-08-28T10:19:35 | ---
language:
- yue
- zh
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- translation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: translation
struct:
- name: yue
dtype: string
- name: zh
dtype: string
splits:
- name: train
num_bytes: 16446012
num_examples: 169949
- name: test
num_bytes: 4107525
num_examples: 42361
download_size: 15755469
dataset_size: 20553537
---
This dataset comprises:
1. Crawled content that is machine-translated from Cantonese to Simplified Chinese.
2. Machine-translated articles from zh-yue.wikipedia.org.
3. [botisan-ai/cantonese-mandarin-translations](https://huggingface.co/datasets/botisan-ai/cantonese-mandarin-translations)
4. [AlienKevin/LIHKG](https://huggingface.co/datasets/AlienKevin/LIHKG)
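Rows follow the `translation` struct declared in the YAML header. A minimal sketch for flattening them into parallel pairs, assuming each row looks like `{"translation": {"yue": ..., "zh": ...}}`:

```python
def to_pair(example):
    """Flatten one row's translation struct into a (yue, zh) tuple.

    The row shape is taken from the dataset_info schema above; the example
    sentence is illustrative, not drawn from the dataset.
    """
    t = example["translation"]
    return t["yue"], t["zh"]

pair = to_pair({"translation": {"yue": "你好嗎?", "zh": "你好吗?"}})
```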
| [
-0.13139574229717255,
-0.4328157603740692,
0.22965893149375916,
0.3083939254283905,
-0.15843749046325684,
-0.17242431640625,
-0.12174032628536224,
-0.34310024976730347,
0.5754952430725098,
0.7905135750770569,
-0.7579078078269958,
-0.7058050632476807,
-0.4638500213623047,
0.4650631844997406... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ift/handwriting_forms | ift | 2023-09-06T14:13:04Z | 18 | 2 | null | [
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"region:us"
] | 2023-09-06T14:13:04Z | 2023-09-06T06:18:43.000Z | 2023-09-06T06:18:43 | ---
license: openrail
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 14177871.8
num_examples: 1400
- name: validation
num_bytes: 2021857
num_examples: 199
- name: test
num_bytes: 5084688
num_examples: 500
download_size: 20674979
dataset_size: 21284416.8
task_categories:
- feature-extraction
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Sohrab Redjai Sani @srsani | [
-0.522269606590271,
-0.41775181889533997,
-0.08525875955820084,
0.3940451741218567,
-0.43859586119651794,
0.04296013340353966,
-0.25492826104164124,
-0.30252811312675476,
0.6896114945411682,
0.5672594904899597,
-0.9326232075691223,
-1.1498960256576538,
-0.7486655712127686,
0.01812314800918... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TRI-ML/gsm8k-hard | TRI-ML | 2023-09-07T18:45:08Z | 18 | 0 | null | [
"region:us"
] | 2023-09-07T18:45:08Z | 2023-09-07T18:44:07.000Z | 2023-09-07T18:44:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bupt/LawDataset-BUPT | bupt | 2023-11-11T13:23:04Z | 18 | 11 | null | [
"size_categories:1M<n<10M",
"language:zh",
"legal",
"region:us"
] | 2023-11-11T13:23:04Z | 2023-09-12T05:55:37.000Z | 2023-09-12T05:55:37 | ---
language:
- zh
tags:
- legal
pretty_name: LawDataset-BUPT
size_categories:
- 1M<n<10M
---
## LawDataset-BUPT ⚖️
Here is the full data from the Legal LLM project, from which we hope to build a high-quality dataset.
Here's our [github project page](https://github.com/KLGR123/LegalLLM-BUPT).
If you want to make any contribution, please contact me via QQ 2248157602.
### Data Source
Our data mainly comes from
- CrimeKgAssistant, 856 crime KG items / 2800k crime name_entities / 200k lawQA with 13 classes
- Tigerbot-law-plugin 55k laws provision data with 11 classes
- Wenshu_ms_dataset 45k law judgements data
- Lexilaw
- LawGPT-zh 52k QA data
- Lawyer_LLAMA law exam and instruction data
- hualv_webste_QA 20k law QA data
- baidu_zhidao_law_QA 36k law QA data
- BELLE general dataset 1.5M
For the BELLE dataset and models, please download directly from the [BELLE huggingface page](https://huggingface.co/datasets/BELLE-2/train_3.5M_CN_With_Category).
### Data Statistics
So far the dataset size is around
- Law QA data size: ~310k
- Law provision data size: ~55k
- Law judgement data size: ~45k
- General data size: ~1500k
### Data Fields
You can check the data fields for each data source below.
Wenshu_ms_dataset 45k law judgements data
```
{
"Case": "王某甲与辽宁古田房地产有限公司房屋拆迁安置补偿合同纠纷一审民事判决书",
"CaseId": "7abb676880254ca79c34a90e0101bc8e",
"CaseProc": "民事一审",
"CaseRecord": "原告王某甲与被告辽宁古田房地产有限公司房屋拆迁安置补偿合同纠纷一案,本院于2018年4月26日受理后,依法由审判员雷凯独任审判,公开开庭进行了审理。原告王某甲与被告辽宁古田房地产有限公司的委托代理人李某、刘某某到庭参加诉讼。本案现已审理终结",
"CaseType": "民事案件",
"JudgeAccusation": "原告王某甲诉称:原告原住大东区XX,2009年动迁至2014年回迁,至今被告没给原告房屋补助款。原告多次向被告主张房屋补助款,被告总是说没钱等等再等等。后来被告用这笔款给原告折抵五年物业费(从2015.1.1至2019.12.31),剩余房屋补助费3万多,到现在一直没解决,故起诉至法院。请求法院判令1、被告给付原告房屋拆迁款48000元;2、起诉费由被告承担。\n被告辽宁古田房地产有限公司辩称:针对原告诉讼请求48000元,被告对此不予认可,原、被告双方于2016年9月21日签订了协议书一份,对双方拆迁安置补助费的具体数额进行了重新确认,顶5年物业费后,尚欠安置费33828元。现原告诉讼请求48000元无法律依据,应按双方签订的协议书继续履行,该协议书系双方真实意思表示,具有法律效力。\n经审理查明:2008年7月25日,原被告签订城市房屋拆迁补偿安置协议。2016年9月21日,原告与被告签订协议书,该协议约定逾期安置补助费为48000元,原被告双方同意按百分之八十即38400元进行全部抵顶。其中4572元抵顶原告房屋五年的物业费(从2015年1月1日至2019年12月31日期间),剩余33828元待被告资金充足时解决。原告在庭审中自述从2015年至今没有缴纳过物业费。\n上述事实,有城市房屋拆迁补偿安置协议、协议书等证据及原被告陈述,经开庭质证,本院予以确认,在卷佐证",
"JudgeReason": "本院认为:2016年9月21日,原告与被告签订协议书系双方真实的意思表示,内容不违反法律规定,合法有效,双方均应遵守。在该协议中,原被告协商一致在抵顶五年的物业费后,被告尚欠原告逾期安置补助费33828元,被告至今没有给付原告,故被告应当给付原告逾期安置补助费33828元。\n综上所述,根据《中华人民共和国合同法》第四十四条之规定,判决如下",
"JudgeResult": "一、被告辽宁古田房地产有限公司于本判决生效后十日内给付原告王某甲逾期安置补助费33828元;\n二、驳回原告王某甲的其他诉讼请求。\n如被告未按本判决所指定的期限履行给付义务,则应当依照《中华人民共和国民事诉讼法》第二百五十三条之规定,加倍支付迟延履行期间的债务利息。\n案件受理费1000元,减半收取500元,由原告王某甲负担177元,由被告辽宁古田房地产有限公司负担323元。\n如不服本判决,可在判决书送达之日起15日内向本院递交上诉状,并按对方当事人的人数提出副本,交纳上诉案件受理费,上诉于辽宁省沈阳市中级人民法院。如上诉期满后7日内未交纳上诉案件受理费,按自动撤回上诉处理",
"Keywords": [
"给付"
],
"Parties": [
{
"NameText": "王某甲",
"Name": "王某甲",
"LegalEntity": "Person",
"Prop": "原告"
},
{
"NameText": "辽宁古田房地产有限公司",
"Name": "辽宁古田房地产有限公司",
"LegalEntity": "Corporation",
"Prop": "被告"
}
],
"Category": {
"cat_1": "房地产纠纷",
"cat_2": "房产纠纷"
}
}
```
Tigerbot-law-plugin 55k laws provision data with 11 classes
```
{"type": "宪法", "title": "中华人民共和国宪法", "chapter1": "第一章 总纲", "content": "第六条 中华人民共和国的社会主义经济制度的基础是生产资料的社会主义公有制,即全民所有制和劳动群众集体所有制。社会主义公有制消灭人剥削人的制度,实行各尽所能、按劳分配的原则。\n国家在社会主义初级阶段,坚持公有制为主体、多种所有制经济共同发展的基本经济制度,坚持按劳分配为主体、多种分配方式并存的分配制度。", "chapter2": "", "chapter3": ""}
```
baidu_zhidao_law_QA 36k law QA data
```
title,question,reply,is_best
在法律中定金与订金的区别订金和定金哪个受,,“定金”是指当事人约定由一方向对方给付的,作为债权担保的一定数额的货币,它属于一种法律上的担保方式,目的在于促使债务人履行债务,保障债权人的债权得以实现。签合同时,对定金必需以书面形式进行约定,同时还应约定定金的数额和交付期限。给付定金一方如果不履行债务,无权要求另一方返还定金;接受定金的一方如果不履行债务,需向另一方双倍返还债务。债务人履行债务后,依照约定,定金应抵作价款或者收回。而“订金”目前我国法律没有明确规定,它不具备定金所具有的担保性质,可视为“预付款”,当合同不能履行时,除不可抗力外,应根据双方当事人的过错承担违约责任。,1
```
CrimeKgAssistant
```
{
"completion": "根据相关法律规定,未满14岁的人不得驾驶机动车辆,骑行电动车也属于机动车范畴。因此,未成年捏了电动车的刹车,可以认定为违法行为。同时,成年骑电动车的人也应当承担相应的民事责任。",
"prompt": "一14岁未成年骑自行车与一成年骑电动车相撞,T形路,未成年拐弯,未成年捏了刹车的电动车属于机动车吗??"
}
```
JEC-QA
```
{"answer": ["D"], "id": "3_2613", "option_list": {"A": "因未办理收养登记,包某与陈煜之间不存在法律上父子关系", "B": "陈煜作为包某生前抚养且无经济来源的人,可适当分得包某遗产", "C": "陈某的遗产由洪某与陈婴继承,陈煜不能继承", "D": "陈煜既可以继承陈某的遗产,也可以继承包某的遗产"}, "statement": "陈某与潘某离婚后,潘某带着2岁的儿子陈煜改嫁包某。陈某、潘某、包某三人订立收养协议,陈煜由包某收养,今后一切与陈某概无关系,但未办理收养登记。5年后,潘某与包某生下一女,取名包红。陈某离婚后,与洪某结婚,生女取名陈婴。几年后,陈某、包某相继去世。下列说法中正确的是:", "type": "1"}
```
| [
-0.5466171503067017,
-0.5842440724372864,
0.4881974756717682,
0.15627527236938477,
-0.6924349665641785,
-0.44761979579925537,
-0.002432212233543396,
-0.26133662462234497,
0.5543028712272644,
0.6319356560707092,
-0.22222711145877838,
-0.7639148831367493,
-0.4230065941810608,
-0.012251881882... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lchakkei/OpenOrca-Traditional-Chinese | lchakkei | 2023-10-11T08:29:08Z | 18 | 4 | null | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | 2023-10-11T08:29:08Z | 2023-09-16T03:15:44.000Z | 2023-09-16T03:15:44 | ---
language:
- zh
license: mit
size_categories:
- 10M<n<100M
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca-Chinese
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 6477736021
num_examples: 4233915
download_size: 4104476393
dataset_size: 6477736021
---
<p><h1>🐋 OpenOrca-Chinese Dataset! 🐋</h1></p>
Thanks to the release of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, a valuable resource for NLP researchers and developers!
This is a Chinese translation of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, produced with Google Translate; we hope it makes a small contribution to Chinese LLM research.
<br/>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
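Since the 'id' field encodes which FLAN Collection submix an example came from, the source can be recovered programmatically. The sketch below assumes ids take the form `submix.number` (e.g. `cot.12`), which is a guess at the exact format — the card only says the id *includes* one of the four markers:

```python
def flan_submix(example_id):
    """Return the FLAN Collection submix an example was sourced from.

    Assumes the id is of the form 'niv.242684' / 't0.1587' / 'cot.12' /
    'flan.564327' (submix name before a dot) — an assumption for this sketch.
    """
    for marker in ("niv", "t0", "cot", "flan"):
        if example_id.startswith(marker + "."):
            return marker
    return "unknown"

print(flan_submix("cot.12"))  # -> cot
```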
| [
-0.6114146113395691,
-0.8206450939178467,
0.07536734640598297,
0.12174840271472931,
-0.11443780362606049,
-0.3044077157974243,
-0.2527678906917572,
-0.7688285708427429,
0.5113933682441711,
0.6329591274261475,
-0.4071667194366455,
-0.6581847667694092,
-0.34102824330329895,
0.191758081316947... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
p208p2002/wudao | p208p2002 | 2023-11-02T09:06:54Z | 18 | 1 | null | [
"task_categories:text-generation",
"size_categories:n>1T",
"language:zh",
"region:us"
] | 2023-11-02T09:06:54Z | 2023-09-19T01:35:45.000Z | 2023-09-19T01:35:45 | ---
language:
- zh
task_categories:
- text-generation
size_categories:
- n>1T
---
# WuDao (悟道) Dataset
Not the original creator; this is only a mirror.
The download is about 60 GB, roughly 220 GB after decompression.
### Original link
[Science Data Bank](https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab)
## Usage
```bash
sudo apt install unrar
pip install patool wget opencc
```
```python
from datasets import load_dataset
# Simplified Chinese
load_dataset("p208p2002/wudao",streaming=True,split="zhs")
# Traditional Chinese (converted with opencc)
load_dataset("p208p2002/wudao",streaming=True,split="zht")
```
## Clearing data
If a download fails, please remove the cached data manually:
```bash
rm -rf ~/.cache/wudao_dataset
```
## Category statistics
```json
{
"_total": 59100001,
"豆瓣话题": 209027,
"科技": 1278068,
"经济": 1096215,
"汽车": 1368193,
"娱乐": 1581947,
"农业": 1129758,
"军事": 420949,
"社会": 446228,
"游戏": 754703,
"教育": 1133453,
"体育": 660858,
"旅行": 821573,
"国际": 630386,
"房产": 387786,
"文化": 710648,
"法律": 36585,
"股票": 1205,
"博客": 15467790,
"日报": 16971,
"评论": 13867,
"孕育常识": 48291,
"健康": 15291,
"财经": 54656,
"医学问答": 314771,
"资讯": 1066180,
"科普文章": 60581,
"百科": 27273280,
"酒业": 287,
"经验": 609195,
"新闻": 846810,
"小红书攻略": 185379,
"生活": 23,
"网页文本": 115830,
"观点": 1268,
"海外": 4,
"户外": 5,
"美容": 7,
"理论": 247,
"天气": 540,
"文旅": 2999,
"信托": 62,
"保险": 70,
"水利资讯": 17,
"时尚": 1123,
"亲子": 39,
"百家号文章": 335591,
"黄金": 216,
"党建": 1,
"期货": 330,
"快讯": 41,
"国内": 15,
"国学": 614,
"公益": 15,
"能源": 7,
"创新": 6
}
```
## Cite
```
@misc{ c6a3fe684227415a9db8e21bac4a15ab,
author = {Zhao Xue and Hanyu Zhao and Sha Yuan and Yequan Wang},
title = {{WuDaoCorpora Text}},
year = 2022,
month = dec,
publisher = {Science Data Bank},
version = {V1},
doi = {10.57760/sciencedb.o00126.00004},
url = {https://doi.org/10.57760/sciencedb.o00126.00004}
}
``` | [
-0.5854843854904175,
-0.37134450674057007,
0.22525516152381897,
0.17596378922462463,
-0.5120620727539062,
0.024582931771874428,
-0.097071073949337,
-0.23069073259830475,
0.6121622920036316,
0.31174853444099426,
-0.45383623242378235,
-0.5518377423286438,
-0.548346757888794,
0.17362923920154... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yentinglin/ntu_adl_recitation | yentinglin | 2023-09-21T02:18:47Z | 18 | 0 | null | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-09-21T02:18:47Z | 2023-09-21T00:57:42.000Z | 2023-09-21T00:57:42 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/xpersona_id | SEACrowd | 2023-09-26T12:34:30Z | 18 | 0 | null | [
"language:ind",
"dialogue-system",
"region:us"
] | 2023-09-26T12:34:30Z | 2023-09-26T11:42:21.000Z | 2023-09-26T11:42:21 | ---
tags:
- dialogue-system
language:
- ind
---
# xpersona_id
XPersona is a multi-lingual extension of Persona-Chat.
The XPersona dataset includes persona conversations in six different languages other than English for building and evaluating multilingual personalized agents.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{lin2020xpersona,
title={XPersona: Evaluating multilingual personalized chatbot},
author={Lin, Zhaojiang and Liu, Zihan and Winata, Genta Indra and Cahyawijaya, Samuel and Madotto, Andrea and Bang, Yejin and Ishii, Etsuko and Fung, Pascale},
journal={arXiv preprint arXiv:2003.07568},
year={2020}
}
@inproceedings{cahyawijaya-etal-2021-indonlg,
title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
author = "Cahyawijaya, Samuel and
Winata, Genta Indra and
Wilie, Bryan and
Vincentio, Karissa and
Li, Xiaohong and
Kuncoro, Adhiguna and
Ruder, Sebastian and
Lim, Zhi Yuan and
Bahar, Syafri and
Khodra, Masayu and
Purwarianti, Ayu and
Fung, Pascale",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.699",
doi = "10.18653/v1/2021.emnlp-main.699",
pages = "8875--8898"
}
```
## License
CC-BY-SA 4.0
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.29211023449897766,
-0.6118221282958984,
0.3543208837509155,
0.6276594400405884,
-0.13619168102741241,
0.18299061059951782,
-0.39918720722198486,
-0.5383438467979431,
0.6012212634086609,
0.5100783109664917,
-0.5350491404533386,
-0.8068939447402954,
-0.3562273383140564,
0.3283317387104034... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Shiveswarran/llm_code_description_v5 | Shiveswarran | 2023-09-29T08:55:05Z | 18 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-09-29T08:55:05Z | 2023-09-28T14:39:03.000Z | 2023-09-28T14:39:03 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hannxu/hc_var | hannxu | 2023-10-03T16:33:15Z | 18 | 2 | null | [
"task_categories:text-classification",
"size_categories:100M<n<1B",
"language:en",
"license:apache-2.0",
"arxiv:2310.01307",
"region:us"
] | 2023-10-03T16:33:15Z | 2023-10-02T15:24:06.000Z | 2023-10-02T15:24:06 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 100M<n<1B
---
# Dataset Card for HC-Var (Human and ChatGPT Texts with Variety)
This is a collection of human texts and ChatGPT (GPT-3.5-Turbo) generated texts, to facilitate studies such as generated-text detection.
It includes texts that are generated or human-written to accomplish various language tasks with various approaches.
The included language tasks and topics are summarized below. Note: for each language task, this dataset considers 3 different prompts to elicit ChatGPT outputs.
The example code to train binary classification models is in [this website](https://github.com/hannxu123/hc_var).
A technical report on some representative detection methods can be find in [this paper](https://arxiv.org/abs/2310.01307).
This dataset was collected by Han Xu from Michigan State
University. Issues and suggestions are welcome in the community panel or by email to xuhan1@msu.edu.
## Key variables in the dataset:
**text**: The text body (including either human or ChatGPT texts.)\
**domain**: The language tasks included in this dataset: News, Review, (Essay) Writing, QA\
**topic**: The topic in each task.\
**prompt**: The prompt used to obtain ChatGPT outputs. "N/A" for human texts.\
**pp_id**: Each task has 3 prompts used to elicit ChatGPT outputs; "pp_id" denotes the index of the prompt: "0" for human texts, "1-3" for ChatGPT texts.\
**label**: "0" for human texts. "1" for ChatGPT texts.
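A minimal sketch of how these fields can be used to separate human and ChatGPT texts per task; the records shown are illustrative placeholders, not taken from the dataset:

```python
from collections import Counter

# Toy records mirroring the schema above (values are illustrative).
records = [
    {"text": "A headline ...", "domain": "News",    "topic": "world",  "prompt": "N/A",                 "pp_id": 0, "label": 0},
    {"text": "A review ...",   "domain": "Review",  "topic": "movie",  "prompt": "Write a review ...",  "pp_id": 2, "label": 1},
    {"text": "An essay ...",   "domain": "Writing", "topic": "school", "prompt": "Write an essay ...",  "pp_id": 1, "label": 1},
]

# label == 0 marks human texts, which the card pairs with prompt "N/A" and pp_id 0.
human = [r for r in records if r["label"] == 0]
chatgpt_per_domain = Counter(r["domain"] for r in records if r["label"] == 1)
```

On the full dataset the same grouping works per `pp_id` to compare how the three prompts affect detection.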
## To cite this dataset
```
@misc{xu2023generalization,
title={On the Generalization of Training-based ChatGPT Detection Methods},
author={Han Xu and Jie Ren and Pengfei He and Shenglai Zeng and Yingqian Cui and Amy Liu and Hui Liu and Jiliang Tang},
year={2023},
eprint={2310.01307},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.2054184079170227,
-0.6799004077911377,
0.12708261609077454,
0.03765888884663582,
-0.24607466161251068,
0.10443798452615738,
-0.2939067780971527,
-0.2811393737792969,
-0.04169590026140213,
0.7003027200698853,
-0.6755238175392151,
-0.8560881018638611,
-0.48700687289237976,
0.1022322177886... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lumos23/alpaca_farm | Lumos23 | 2023-10-09T19:22:49Z | 18 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-10-09T19:22:49Z | 2023-10-02T23:26:33.000Z | 2023-10-02T23:26:33 | ---
license: cc-by-nc-4.0
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TIGER-Lab/MetricInstruct | TIGER-Lab | 2023-10-22T15:04:12Z | 18 | 6 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"language:cs",
"language:ru",
"language:fr",
"license:mit",
"arxiv:2310.00752",
"region:us"
] | 2023-10-22T15:04:12Z | 2023-10-04T03:05:36.000Z | 2023-10-04T03:05:36 | ---
configs:
- config_name: train
data_files:
- split: train_real_world
path:
- data/new_real_world_.json
- split: train_synthetic
path:
- data/new_synthetic_.json
- split: train_mix
path:
- data/new_mix_.json
license: mit
task_categories:
- text-generation
language:
- en
- zh
- cs
- ru
- fr
size_categories:
- 10K<n<100K
---
## MetricInstruct
We present TIGERScore, a **T**rained metric that follows **I**nstruction **G**uidance to perform **E**xplainable and **R**eference-free evaluation over a wide spectrum of text generation tasks. TIGERScore is guided by natural language instructions to provide error analysis that pinpoints the mistakes in the generated text. Our metric is based on LLaMA-2 and trained on our meticulously curated instruction-tuning dataset MetricInstruct, which covers 6 text generation tasks and 23 text generation datasets. The dataset consists of 48K quadruples in the form of (instruction, input, system output, error analysis). We collected the system outputs through diverse channels to cover different types of errors.
Project Page: [https://tiger-ai-lab.github.io/TIGERScore/](https://tiger-ai-lab.github.io/TIGERScore/)
Paper: [https://arxiv.org/abs/2310.00752](https://arxiv.org/abs/2310.00752)
Code: [https://github.com/TIGER-AI-Lab/TIGERScore](https://github.com/TIGER-AI-Lab/TIGERScore)
Demo: [https://huggingface.co/spaces/TIGER-Lab/TIGERScore](https://huggingface.co/spaces/TIGER-Lab/TIGERScore)
TIGERScore-7B-V1.0: [https://huggingface.co/TIGER-Lab/TIGERScore-7B-V1.0](https://huggingface.co/TIGER-Lab/TIGERScore-7B-V1.0)
TIGERScore-13B-V1.0: [https://huggingface.co/TIGER-Lab/TIGERScore-13B-V1.0](https://huggingface.co/TIGER-Lab/TIGERScore-13B-V1.0)
We present the MetricInstruct dataset, which is employed to fine-tune TIGERScore. The three underlying criteria for dataset construction are:
1. Dataset diversity: we choose 23 distinctive datasets as the source context to cover enough generation tasks.
2. Error coverage: we take system outputs generated from 50+ text generation systems to cover all types of errors and guarantee a balanced distribution.
3. Quality assurance: to ensure MetricInstruct is tailored to gather in-depth error analysis, we sourced it by prompting OpenAI GPT models and then filtered it through different heuristics to eliminate low-quality error analyses.
## Data Source
Our system outputs come from two channels, namely real-world system outputs and synthetic outputs. The real-world system outputs are obtained from real systems, which ensures the error distribution is aligned with real-world ones.
Check out our paper for more details.
| Task | Real-World Dataset | Output Source | Synthetic Dataset | Output Source |
|:--------:|:-----------------------------------------:|:--------------:|:-----------------------------------:|:--------------:|
| Summarization | SummEval, XSum, Newsroom, SAMSum | 27 Systems | CNN/DM, XSum, Gigaword, SAMSum | GPT-4 |
| Translation | WMT | 18 Systems | WMT | GPT-4 |
| Data-to-Text | WebNLG-2020, WikiTableText, ToTTo | 17 Systems | WikiTableText, Dart, ToTTo | GPT-4 |
| Long-Form QA | ASQA, FeTaQA, CosmosQA, ELI5 | 5 Systems | ASQA, FeTaQA, CosmosQA, ELI5 | GPT-4 |
| MathQA | GSM8K | 5 Systems | GSM8K, MathQA | GPT-4 |
| Instruct | MixInstruct | 11 Systems | LIMA, AlpacaFarm, OASST1, Guanaco, Dolly | GPT-4 |
## Data Format
The dataset consists of 48K quadruples in the form of (instruction, input, system output, error analysis).
For each item in the dataset, `task` represents its corresponding text generation task, `instruction` is its task instruction, `input_context` is its input source, and `hypo_output` is the generated output, and `errors` is the error analysis given by ChatGPT or GPT-4.
## Formatting
To format the data fields into a single prompt for finetuning or testing, we provide the following code for reference:
```python
from string import Template

FINETUNE_INST = "You are evaluating errors in a model-generated output for a(an) ${task} task."
FINETUNE_INPUT = """\
Task instruction: ${generation_instruction}
Source: ${input_context}
Model-generated Output: ${hypothesis_output}
Based on the given task instruction and source, identify errors in this model-generated output.
For each error you give in the response, please also elaborate the following information:
- error location (the words that are wrong in the output)
- error aspect it belongs to.
- explanation why it's an error, and the correction suggestions.
- severity of the error ("Major" or "Minor").
- reduction of score (between 0.5 and 5 given the severity of the error)
Your evaluation output:
"""
# `task`, `instruction`, `input_context`, and `hypo_output` come from a single dataset item
inst_part = Template(FINETUNE_INST)
inst_part = inst_part.substitute(task=task)
input_part = Template(FINETUNE_INPUT)
input_part = input_part.substitute(
generation_instruction=instruction,
input_context=input_context,
hypothesis_output=hypo_output
)
prompt = (inst_part + "\n" + input_part).strip("\n ") + "\n"
# tigerscore_tokenizer and tigerscore_model are the TIGERScore tokenizer and
# model, loaded beforehand from one of the checkpoints linked above
encodings = tigerscore_tokenizer(prompt, return_tensors="pt")
input_ids = encodings["input_ids"].to(tigerscore_model.device)
attention_mask = encodings["attention_mask"].to(tigerscore_model.device)
```
Example of formatted prompt:
```txt
You are evaluating errors in a model-generated output for a(an) translation task.
Task instruction: Translate the following text from German to English.
Source: Der künftige EM-Cheforganisator Philipp Lahm soll laut Grindel im DFB-Präsidium mitarbeiten.
Model-generated Output: According to Grindel, the future head of the European Championships, Philipp Lahm, is to participate in the DFB Presidency.
Based on the given task instruction and source, identify errors in this model-generated output.
For each error you give in the response, please also elaborate the following information:
- error location (the words that are wrong in the output)
- error aspect it belongs to.
- explanation why it's an error, and the correction suggestions.
- severity of the error ("Major" or "Minor").
- reduction of score (between 0.5 and 5 given the severity of the error)
Your evaluation output:
```
## Citation
```
@article{jiang2023TIGERScore,
title={TIGERScore: Towards Building Explainable Metric for All Text Generation Tasks},
  author={Dongfu Jiang and Yishan Li and Ge Zhang and Wenhao Huang and Bill Yuchen Lin and Wenhu Chen},
journal={arXiv preprint arXiv:2310.00752},
year={2023}
}
``` | [
-0.284744530916214,
-0.7351940870285034,
0.3436303734779358,
0.2861602008342743,
-0.09446004778146744,
-0.09682148694992065,
-0.2651635706424713,
-0.2889423072338104,
-0.13138173520565033,
0.32802602648735046,
-0.728661298751831,
-0.7508076429367065,
-0.4870905876159668,
0.2095911651849746... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DeepPavlov/verbalist_prompts | DeepPavlov | 2023-10-21T20:14:45Z | 18 | 1 | null | [
"language:ru",
"language:en",
"arxiv:2305.11206",
"region:us"
] | 2023-10-21T20:14:45Z | 2023-10-04T12:23:47.000Z | 2023-10-04T12:23:47 | ---
configs:
- config_name: default
data_files:
- split: dim_oasst_en
path: data/dim_oasst_en-*
- split: dim_oasst_ru
path: data/dim_oasst_ru-*
- split: dim_lima
path: data/dim_lima-*
- split: dim_logic_tasks_ru
path: data/dim_logic_tasks_ru-*
- split: dim_wikihow_en
path: data/dim_wikihow_en-*
- split: dim_wikihow_ru
path: data/dim_wikihow_ru-*
- split: dim_essayforum_writing_prompts_6k
path: data/dim_essayforum_writing_prompts_6k-*
- split: dim_sharegpt_short_ru
path: data/dim_sharegpt_short_ru-*
- split: dim_openreview_prompts_65
path: data/dim_openreview_prompts_65-*
- split: dim_roleplay_instruct_v2_final
path: data/dim_roleplay_instruct_v2_final-*
- split: dim_kinomania_scripts
path: data/dim_kinomania_scripts-*
- split: dim_bugurt_thread_prompts
path: data/dim_bugurt_thread_prompts-*
- split: dim_russian_lyrics_prompts
path: data/dim_russian_lyrics_prompts-*
- split: dim_ru_instruct_gpt4
path: data/dim_ru_instruct_gpt4-*
- split: dim_gpt_roleplay_realm
path: data/dim_gpt_roleplay_realm-*
- split: dim_ultrachat_ru
path: data/dim_ultrachat_ru-*
- split: dim_scitldr
path: data/dim_scitldr-*
- split: dim_linux_man_pages_tldr_summarized
path: data/dim_linux_man_pages_tldr_summarized-*
- split: dim_dolphin_ru_3k
path: data/dim_dolphin_ru_3k-*
- split: dim_runne_prompts
path: data/dim_runne_prompts-*
- split: dim_lurk_prompts
path: data/dim_lurk_prompts-*
- split: dim_panorama_prompts_10k
path: data/dim_panorama_prompts_10k-*
- split: dim_resh_edu_short_prompts
path: data/dim_resh_edu_short_prompts-*
- split: dim_databricks_dolly_15k_ru
path: data/dim_databricks_dolly_15k_ru-*
- split: dim_databricks_dolly_15k_en
path: data/dim_databricks_dolly_15k_en-*
- split: dim_grammarly_coedit
path: data/dim_grammarly_coedit-*
- split: dim_kinopoisk_prompts
path: data/dim_kinopoisk_prompts-*
- split: dim_medical_qa_ru_prompts
path: data/dim_medical_qa_ru_prompts-*
- split: dim_joke_explaination_prompts
path: data/dim_joke_explaination_prompts-*
- split: dim_oa_stackexchange_200k
path: data/dim_oa_stackexchange_200k-*
- split: dim_scale_helpful_no_math
path: data/dim_scale_helpful_no_math-*
- split: dim_law_stackexchange_prompts
path: data/dim_law_stackexchange_prompts-*
- split: dim_ficbook_prompts_best_10k
path: data/dim_ficbook_prompts_best_10k-*
- split: dim_azbyka_logic_ru
path: data/dim_azbyka_logic_ru-*
- split: dim_povarenok
path: data/dim_povarenok-*
- split: dim_AO3_fandom_chatbot_1to1
path: data/dim_AO3_fandom_chatbot_1to1-*
- split: dim_habr_prompts_5k
path: data/dim_habr_prompts_5k-*
- split: dim_what_where_when_50k
path: data/dim_what_where_when_50k-*
- split: dim_competition_math
path: data/dim_competition_math-*
- split: dim_sharegpt_short_en_30k
path: data/dim_sharegpt_short_en_30k-*
- split: dim_ru_turbo_alpaca_evol_instruct
path: data/dim_ru_turbo_alpaca_evol_instruct-*
- split: dim_ru_turbo_saiga
path: data/dim_ru_turbo_saiga-*
- split: dim_bugurt_completion_prompts
path: data/dim_bugurt_completion_prompts-*
- split: dim_tldr_17_50k
path: data/dim_tldr_17_50k-*
- split: dim_grade_school_math_instructions
path: data/dim_grade_school_math_instructions-*
- split: dim_tldr_news
path: data/dim_tldr_news-*
- split: dim_grade_school_math_instructions_ru
path: data/dim_grade_school_math_instructions_ru-*
- split: dim_dialogsum
path: data/dim_dialogsum-*
- split: dim_HC3_ru
path: data/dim_HC3_ru-*
- split: dim_horoscopes_ru_10k
path: data/dim_horoscopes_ru_10k-*
- split: dim_yandex_q_200k
path: data/dim_yandex_q_200k-*
- split: dim_leetcodesolutions_en_2k
path: data/dim_leetcodesolutions_en_2k-*
- split: dim_forum_uristov_rf_prompts
path: data/dim_forum_uristov_rf_prompts-*
- split: dim_dialogsum_ru
path: data/dim_dialogsum_ru-*
- split: dim_huggingartists_prompts
path: data/dim_huggingartists_prompts-*
dataset_info:
features:
- name: conversation_text
sequence: string
splits:
- name: dim_oasst_en
num_bytes: 4335500
num_examples: 2289
- name: dim_oasst_ru
num_bytes: 6206378
num_examples: 2220
- name: dim_lima
num_bytes: 2892267
num_examples: 1030
- name: dim_logic_tasks_ru
num_bytes: 76915
num_examples: 86
- name: dim_wikihow_en
num_bytes: 16008199
num_examples: 1995
- name: dim_wikihow_ru
num_bytes: 24451573
num_examples: 2058
- name: dim_essayforum_writing_prompts_6k
num_bytes: 22326330
num_examples: 6361
- name: dim_sharegpt_short_ru
num_bytes: 808319
num_examples: 253
- name: dim_openreview_prompts_65
num_bytes: 6739952
num_examples: 150
- name: dim_roleplay_instruct_v2_final
num_bytes: 4389286
num_examples: 7188
- name: dim_kinomania_scripts
num_bytes: 238731
num_examples: 27
- name: dim_bugurt_thread_prompts
num_bytes: 302191
num_examples: 223
- name: dim_russian_lyrics_prompts
num_bytes: 18676
num_examples: 43
- name: dim_ru_instruct_gpt4
num_bytes: 18351658
num_examples: 14222
- name: dim_gpt_roleplay_realm
num_bytes: 20163429
num_examples: 8700
- name: dim_ultrachat_ru
num_bytes: 4495105
num_examples: 500
- name: dim_scitldr
num_bytes: 4049209
num_examples: 3229
- name: dim_linux_man_pages_tldr_summarized
num_bytes: 3006631
num_examples: 481
- name: dim_dolphin_ru_3k
num_bytes: 7976776
num_examples: 3000
- name: dim_runne_prompts
num_bytes: 2686148
num_examples: 537
- name: dim_lurk_prompts
num_bytes: 92012533
num_examples: 5671
- name: dim_panorama_prompts_10k
num_bytes: 28964132
num_examples: 11024
- name: dim_resh_edu_short_prompts
num_bytes: 12380000
num_examples: 2106
- name: dim_databricks_dolly_15k_ru
num_bytes: 21900617
num_examples: 14914
- name: dim_databricks_dolly_15k_en
num_bytes: 11973713
num_examples: 15011
- name: dim_grammarly_coedit
num_bytes: 18500223
num_examples: 82466
- name: dim_kinopoisk_prompts
num_bytes: 136323982
num_examples: 36591
- name: dim_medical_qa_ru_prompts
num_bytes: 75634717
num_examples: 80101
- name: dim_joke_explaination_prompts
num_bytes: 196224
num_examples: 364
- name: dim_oa_stackexchange_200k
num_bytes: 192535277
num_examples: 200000
- name: dim_scale_helpful_no_math
num_bytes: 85610911
num_examples: 17095
- name: dim_law_stackexchange_prompts
num_bytes: 64544963
num_examples: 24343
- name: dim_ficbook_prompts_best_10k
num_bytes: 75867114
num_examples: 10000
- name: dim_azbyka_logic_ru
num_bytes: 173101
num_examples: 480
- name: dim_povarenok
num_bytes: 93518909
num_examples: 46500
- name: dim_AO3_fandom_chatbot_1to1
num_bytes: 1162058
num_examples: 614
- name: dim_habr_prompts_5k
num_bytes: 40224997
num_examples: 5000
- name: dim_what_where_when_50k
num_bytes: 38385243
num_examples: 50000
- name: dim_competition_math
num_bytes: 5808689
num_examples: 7500
- name: dim_sharegpt_short_en_30k
num_bytes: 86599862
num_examples: 29597
- name: dim_ru_turbo_alpaca_evol_instruct
num_bytes: 105340901
num_examples: 47793
- name: dim_ru_turbo_saiga
num_bytes: 79875722
num_examples: 37699
- name: dim_bugurt_completion_prompts
num_bytes: 5471066
num_examples: 5000
- name: dim_tldr_17_50k
num_bytes: 81185070
num_examples: 50000
- name: dim_grade_school_math_instructions
num_bytes: 4655452
num_examples: 8792
- name: dim_tldr_news
num_bytes: 4014718
num_examples: 7138
- name: dim_grade_school_math_instructions_ru
num_bytes: 6845510
num_examples: 7473
- name: dim_dialogsum
num_bytes: 11176807
num_examples: 12460
- name: dim_HC3_ru
num_bytes: 43395731
num_examples: 24322
- name: dim_horoscopes_ru_10k
num_bytes: 9489348
num_examples: 10000
- name: dim_yandex_q_200k
num_bytes: 292443135
num_examples: 200000
- name: dim_leetcodesolutions_en_2k
num_bytes: 4708692
num_examples: 2048
- name: dim_forum_uristov_rf_prompts
num_bytes: 2757263
num_examples: 1849
- name: dim_dialogsum_ru
num_bytes: 18657989
num_examples: 12460
- name: dim_huggingartists_prompts
num_bytes: 121909835
num_examples: 64006
download_size: 0
dataset_size: 2023767777
language:
- ru
- en
---
# Verbalist (буквоед): a Russian-language assistant
A project largely inspired by [Saiga](https://huggingface.co/IlyaGusev/saiga2_7b_lora).
I collected the highest-quality datasets from [huggingface.datasets](https://huggingface.co/datasets) and additionally scraped sites that I considered genuinely useful for building a ChatGPT analogue. The datasets carry different licenses: some, such as [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1), were created specifically for training models of this kind, while others are direct dumps of ChatGPT dialogues ([RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K)).
The contribution of this repository is the systematization and standardization of existing datasets, the addition of new ones, and the training of models on this data.
- [Google Sheets table with the datasets and their descriptions](https://docs.google.com/spreadsheets/d/10xcsINF_c_zUZchT8p-8xIuHDgcuwg63jjl2ortBP9I/edit?usp=sharing)
### Datasets
- **[A combined dataset where all data is already prepared for training a dialogue model](https://huggingface.co/datasets/dim/verbalist_prompts)**
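A minimal sketch of working with one example from the combined dataset. The `to_chat` helper is an assumption, not part of the repository: it presumes that the single `conversation_text` field stores a dialogue as a flat list of strings alternating user/assistant turns.

```python
# Sketch: turning one example of dim/verbalist_prompts into chat turns.
# Assumption (not stated on the card): `conversation_text` is a flat list
# of utterances alternating user / assistant.

def to_chat(conversation_text):
    """Map a flat list of utterances to role-tagged messages."""
    roles = ("user", "assistant")
    return [
        {"role": roles[i % 2], "content": text}
        for i, text in enumerate(conversation_text)
    ]

def load_split(split="dim_oasst_ru"):
    # Requires `pip install datasets` and network access.
    from datasets import load_dataset
    return load_dataset("dim/verbalist_prompts", split=split)

if __name__ == "__main__":
    example = ["Привет!", "Здравствуйте, чем могу помочь?"]
    print(to_chat(example))
```

Split names follow the table below (e.g. `dim_oasst_ru`, `dim_lima`).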
|name|link|description|original_name|original_source|preparation_script|language|amount_examples|mean_llama_tokens|std|min_llama_tokens|25%|50%|75%|max_llama_tokens|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|dim/oasst_en|https://huggingface.co/datasets/dim/oasst_en|OpenAssistant Conversations Dataset in English, manually filtered by me. About 30% of dialogues in the original turned out to be invalid: sometimes the user playing the assistant role was rude, sometimes people simply answered "I don't know", and some questions were not substantive enough or too short. The annotation is available here: https://docs.google.com/spreadsheets/d/117t5-Tr-dxdODpyFBkBg5R8GklYBlsvBfeDyjqwz2pA/edit?usp=sharing|2023-04-12_oasst_ready.messages.jsonl.gz|https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/2023-04-12_oasst_ready.messages.jsonl.gz|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oasst|en|2289|468.6788991|295.0864391|17|264|410|618|2332|
|dim/oasst_ru|https://huggingface.co/datasets/dim/oasst_ru|OpenAssistant Conversations Dataset in Russian, manually filtered by me. About 30% of dialogues in the original turned out to be invalid: sometimes the user playing the assistant role was rude, sometimes people simply answered "I don't know", and some questions were not substantive enough or too short. The annotation is available here: https://docs.google.com/spreadsheets/d/1uiOnqxiytuxrB6u6q2pMSdnMfqjT3arfg8DlT-OWlb0/edit?usp=sharing|2023-04-12_oasst_ready.messages.jsonl.gz|https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/2023-04-12_oasst_ready.messages.jsonl.gz|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oasst|ru|2220|589.6112613|479.835392|7|278|465|763.5|5028|
|dim/lima|https://huggingface.co/datasets/dim/lima|1000 high-quality English training examples collected from Stack Exchange (STEM), Stack Exchange (Other), wikiHow, Pushshift r/WritingPrompts, Natural Instructions, plus unique instructions written by the paper's authors. See the [paper](https://arxiv.org/pdf/2305.11206.pdf) for details.|GAIR/lima|https://huggingface.co/datasets/GAIR/lima|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/lima|en|1030|712.9456311|671.179319|29|312.75|488.5|825|3920|
|dim/logic_tasks_ru|https://huggingface.co/datasets/dim/logic_tasks_ru|Children's logic problems taken from https://www.potehechas.ru/zadachi/zadachi.shtml.|Логические задачи - Логика и нестандартное мышление|https://www.potehechas.ru/zadachi/zadachi.shtml|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/logic_tasks_ru|ru|86|193.0697674|76.69048422|58|133.75|185|243.5|432|
|dim/wikihow_en|https://huggingface.co/datasets/dim/wikihow_en|English articles extracted from the Wikihow website.|0x22almostEvil/multilingual-wikihow-qa-16k|https://huggingface.co/datasets/0x22almostEvil/multilingual-wikihow-qa-16k|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/wiki_how|en|1995|2037.86416|870.1910713|265|1463|1913|2461.5|8988|
|dim/wikihow_ru|https://huggingface.co/datasets/dim/wikihow_ru|Russian articles extracted from the Wikihow website.|0x22almostEvil/multilingual-wikihow-qa-16k|https://huggingface.co/datasets/0x22almostEvil/multilingual-wikihow-qa-16k|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/wiki_how|ru|2058|2498.119534|1587.851549|139|1236.25|2264|3421.75|10217|
|dim/essayforum_writing_prompts_6k|https://huggingface.co/datasets/dim/essayforum_writing_prompts_6k|Requests for help with short essays posted on this site. Only answers from the site's chief administrator are included, since they are usually the most thorough and thoughtful.|EssayForum|https://essayforum.com/writing/|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/essayforum|en|6361|783.1760729|285.4314176|258|629|742|879|4966|
|dim/sharegpt_short_ru|https://huggingface.co/datasets/dim/sharegpt_short_ru|A cleaned Russian version of ShareGPT. I tried to cut out all prompts where the model apologizes that it cannot do something or has no internet access. Dialogues that conflict with the model's values were simply excluded. I also tried to remove mentions that it is an AI model, since roleplay traits are handled by other datasets.|RyokoAI/ShareGPT52K|https://huggingface.co/datasets/RyokoAI/ShareGPT52K|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/sharegpt|ru|253|706.6521739|494.7437584|13|310|628|1078|1861|
|dim/openreview_prompts_65|https://huggingface.co/datasets/dim/openreview_prompts_65|Reviews of real scientific papers from openreview. The result is actually rather small, since many papers are not on arXiv or simply have no reviews; I also scraped only a small part of the site.|https://openreview.net/|https://openreview.net/|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/openreview|en|150|13531.51333|6966.623686|4893|8279|12648.5|15833.5|41494|
|dim/roleplay_instruct_v2_final|https://huggingface.co/datasets/dim/roleplay_instruct_v2_final|GPT-4 roleplay dialogues with various characters, in English.|roleplay-instruct-v2-final|https://github.com/teknium1/GPTeacher|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/gpt_roleplay_realm|en|7188|155.1413467|97.71215667|14|88|125|192|1291|
|dim/kinomania_scripts|https://huggingface.co/datasets/dim/kinomania_scripts|A small dataset containing full movie scripts and their summaries.|https://www.kinomania.ru/scripts|https://www.kinomania.ru/scripts|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/kinomania_scripts|ru\en|27|2603.407407|510.375447|1887|2175|2370|3069|3616|
|dim/bugurt_thread_prompts|https://huggingface.co/datasets/dim/bugurt_thread_prompts|A small set of "bugurt" rants annotated together with a friend of mine, so the model learns to write a bugurt about a given situation. Collected from the Telegram channel БУГУРТ ТРЕД (https://t.me/bugurtthread).|https://t.me/bugurtthread|https://t.me/bugurtthread|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/bugurt_thread|ru|223|334.4529148|271.2557988|48|148.5|254|434.5|1645|
|dim/russian_lyrics_prompts|https://huggingface.co/datasets/dim/russian_lyrics_prompts|A small set of prompts I collected from various poetry-writing textbooks, so the model learns to write poems on a given topic using a required literary device.|Учебник стихосложения|https://stihi.ru/uchebnik/|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/russian_lyrics_prompts|ru|43|106.1395349|71.00220701|45|71|83|96.5|411|
|dim/ru_instruct_gpt4|https://huggingface.co/datasets/dim/ru_instruct_gpt4|Russian instructions generated with GPT-4.|lksy/ru_instruct_gpt4|https://huggingface.co/datasets/lksy/ru_instruct_gpt4|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_instruct_gpt4|ru|14222|259.2173393|237.9433891|16|109|175|271|1374|
|dim/gpt_roleplay_realm|https://huggingface.co/datasets/dim/gpt_roleplay_realm|Dialogues of fictional characters created with GPT-4; the dialogues themselves were generated with GPT-3.5. Russian and English.|IlyaGusev/gpt_roleplay_realm|https://huggingface.co/datasets/IlyaGusev/gpt_roleplay_realm|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/gpt_roleplay_realm|ru\en|8700|504.2424138|117.6228987|180|424|489|569|1207|
|dim/ultrachat_ru|https://huggingface.co/datasets/dim/ultrachat_ru|A random dataset of ChatGPT dialogues found on huggingface. Template phrases such as "I cannot do that" and "as a language model" were cut from the dialogues, since a reasonable solution to the task usually followed them.|kaleinaNyan/UltraChat_ru|https://huggingface.co/datasets/kaleinaNyan/UltraChat_ru|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ultrachat_ru|ru|500|1781.782|901.1212735|267|1113.25|1648|2250.25|7303|
|dim/scitldr|https://huggingface.co/datasets/dim/scitldr|Expert-written summaries of scientific papers in English.|allenai/scitldr|https://huggingface.co/datasets/allenai/scitldr|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/scitldr|en|3229|258.748529|71.41209752|60|209|252|303|689|
|dim/linux_man_pages_tldr_summarized|https://huggingface.co/datasets/dim/linux_man_pages_tldr_summarized|Linux tool manuals summarized into a convenient set of commands with short descriptions.|tmskss/linux-man-pages-tldr-summarized|https://huggingface.co/datasets/tmskss/linux-man-pages-tldr-summarized|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/linux-man-pages-tldr-summarized|en|481|1567.727651|3590.30871|96|405|765|1386|49888|
|dim/dolphin_ru_3k|https://huggingface.co/datasets/dim/dolphin_ru_3k|A 3000-example subsample of translated dolphin tasks. The examples in the original dataset are FLANv2 prompts solved with GPT-4 or GPT-3.5.|d0rj/dolphin-ru|https://huggingface.co/datasets/d0rj/dolphin-ru|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dolphin_ru|ru|3000|556.1133333|650.0962612|19|207|369.5|720.25|6787|
|dim/runne_prompts|https://huggingface.co/datasets/dim/runne_prompts|Prompts built from the RuNNE dataset. During training I composed the prompt as follows: first the text "Найди все именованные сущности в данном тексте:" ("Find all named entities in this text:"), then the text itself. As output, the model must generate a JSON array of all found named entities, e.g. [{"name": "PERSON", "ent": "Ким Чен Нама", "pos": "0 12"}, {"name": "ORGANIZATION", "ent": "Полиция Малайзии", "pos": "56 72"}]|iluvvatar/RuNNE|https://huggingface.co/datasets/iluvvatar/RuNNE|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/RuNNE|ru|537|1479.750466|230.0259174|581|1337|1480|1635|1988|
|dim/lurk_prompts|https://huggingface.co/datasets/dim/lurk_prompts|Definitions of various terms from the lurk site. The prompts were composed automatically as: "напиши определение для (TERM) в стиле lurk" ("write a definition for (TERM) in the style of lurk").|averoo/lurk|https://huggingface.co/datasets/averoo/lurk/viewer/default/train?p=2|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/lurk|ru|5671|3450.34262|4147.897824|35|710.5|2010|4593|55098|
|dim/panorama_prompts_10k|https://huggingface.co/datasets/dim/panorama_prompts_10k|Humorous news headlines and articles from the Panorama site.|its5Q/panorama|https://huggingface.co/datasets/its5Q/panorama|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/panorama|ru|11024|516.9588171|191.3774023|36|422|498|585|3496|
|dim/resh_edu_short_prompts|https://huggingface.co/datasets/dim/resh_edu_short_prompts|Lessons from resh.edu.ru, including the lesson title, topic, grade, and lesson text with exercises.|its5Q/resh-edu|https://huggingface.co/datasets/its5Q/resh-edu|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/resh_edu|ru|2106|1431.510921|435.7847102|56|1175.5|1517|1777|2029|
|dim/databricks_dolly_15k_ru|https://huggingface.co/datasets/dim/databricks_dolly_15k_ru|The dolly dataset translated into Russian. Contains instructions on a wide range of topics.|dwarf2/databricks-dolly-15k-ru|https://huggingface.co/dwarf2/databricks-dolly-15k-ru|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/databricks_dolly_15k_ru|ru|14914|305.4638595|405.874049|8|87|182|370|9268|
|dim/databricks_dolly_15k_en|https://huggingface.co/datasets/dim/databricks_dolly_15k_en|databricks-dolly-15k is an open-source dataset of instruction-following records created by thousands of Databricks employees in the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.|databricks/databricks-dolly-15k|https://huggingface.co/datasets/databricks/databricks-dolly-15k|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/databricks_dolly_15k_en|en|15011|204.7264006|302.5539423|6|57|119|242|8883|
|dim/grammarly_coedit|https://huggingface.co/datasets/dim/grammarly_coedit|Prompts asking to fix grammatical and stylistic errors in English.|grammarly/coedit|https://huggingface.co/datasets/grammarly/coedit|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grammarly_coedit|en|82466|53.7128271|26.73822864|10|35|46|64|694|
|dim/kinopoisk_prompts|https://huggingface.co/datasets/dim/kinopoisk_prompts|Kinopoisk reviews of the top 250 movies. The prompts ask to write a good, bad, or neutral review of a specific movie.|blinoff/kinopoisk|https://huggingface.co/datasets/blinoff/kinopoisk|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/kinopoisk|ru|36591|875.0955973|565.3212035|48|484|733|1117|8628|
|dim/medical_qa_ru_prompts|https://huggingface.co/datasets/dim/medical_qa_ru_prompts|Questions and answers from a medical forum. This version of the dataset keeps only the first answer from the original.|blinoff/medical_qa_ru_data|https://huggingface.co/datasets/blinoff/medical_qa_ru_data|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/medical_qa_ru_data|ru|80101|206.710528|175.4343973|12|106|161|247|5062|
|dim/joke_explaination_prompts|https://huggingface.co/datasets/dim/joke_explaination_prompts|Joke explanations in English. Differs from the original dataset in that I removed the last sentence of each explanation, since it references a video on the site.|theblackcat102/joke_explaination|https://huggingface.co/datasets/theblackcat102/joke_explaination|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/joke_explaination|en|364|143.5741758|68.90275411|21|99|137.5|189.25|334|
|dim/oa_stackexchange_200k|https://huggingface.co/datasets/dim/oa_stackexchange_200k|Q&A from StackExchange. The original dataset was built by keeping only topics with an accepted answer where both question and answer are under 1000 characters; other answers, questions without accepted answers, and long entries were removed. Since the original is too large, I randomly sampled 200k examples.|donfu/oa-stackexchange|https://huggingface.co/datasets/donfu/oa-stackexchange|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oa_stackexchange|en|200000|276.29862|112.5004436|22|194|265|345|1226|
|dim/scale_helpful_no_math|https://huggingface.co/datasets/dim/scale_helpful_no_math|A set of English Q&A dialogues of unknown origin.|HuggingFaceH4/scale_helpful_no_math|https://huggingface.co/datasets/HuggingFaceH4/scale_helpful_no_math/viewer/default/train_rm|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/scale_helpful_no_math|en|17095|1235.302603|838.1097885|53|663|1063|1617|34480|
|dim/law_stackexchange_prompts|https://huggingface.co/datasets/dim/law_stackexchange_prompts|English law questions from StackExchange. The original dataset was converted to markdown.|ymoslem/Law-StackExchange|https://huggingface.co/datasets/ymoslem/Law-StackExchange|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/law_stackexchange|en|24343|689.1184324|565.0316906|43|354|540|836|8969|
|dim/ficbook_prompts_best_10k|https://huggingface.co/datasets/dim/ficbook_prompts_best_10k|The top 10k fanfics from ficbook.net. All prompts have the form: write a fanfic titled {title} with the following description {description} and tags {tags}, where title, description, and tags come from the original work.|AlexWortega/FicBook|https://huggingface.co/datasets/AlexWortega/FicBook|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ficbook|ru|10000|1737.8214|402.0748161|166|1716|1950|1950|1952|
|dim/azbyka_logic_ru|https://huggingface.co/datasets/dim/azbyka_logic_ru|A small set of children's logic and Orthodox puzzles taken from https://azbyka.ru/deti/logicheskie-i-zanimatelnye-zadachi . They usually have only a final answer, without a worked solution. I tried to write out solutions for some of them but only managed 35; if anyone wants to continue, I would be glad: https://docs.google.com/spreadsheets/d/1JRbtppbZCUbV_Eqd0nKbRDQEuPnJIAgJ70cUILEDUI4/edit?usp=sharing .|Логические и занимательные задачи (300 задач)|https://azbyka.ru/deti/logicheskie-i-zanimatelnye-zadachi|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/azbyka_logic_ru|ru|480|77.4375|77.56990416|14|31|50|91|652|
|dim/povarenok|https://huggingface.co/datasets/dim/povarenok|The 46k best recipes from povarenok.ru: recipe text, ingredient list, and dish name.|https://www.povarenok.ru/recipes/|https://www.povarenok.ru/recipes/|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/povarenok|ru|46500|488.9118495|344.8563249|31|281|440|632|5542|
|dim/AO3_fandom_chatbot_1to1|https://huggingface.co/datasets/dim/AO3_fandom_chatbot_1to1|A set of roleplay dialogues with character descriptions and their play-acting. Origin unknown.|ebony59/AO3_fandom_chatbot_1to1|https://huggingface.co/datasets/ebony59/AO3_fandom_chatbot_1to1|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/AO3_fandom_chatbot_1to1|en|614|493.7166124|226.3885365|129|328.25|432.5|611.75|1272|
|dim/habr_prompts_5k|https://huggingface.co/datasets/dim/habr_prompts_5k|Articles from Habr. The dataset was built with chatgpt: chatgpt rewrote the headlines so they sound like user questions, with the article itself as the target.|IlyaGusev/habr|https://huggingface.co/datasets/IlyaGusev/habr|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/habr|ru|5000|1732.892|454.8418369|19|1920.75|1950|1951|1952|
|dim/what_where_when_50k|https://huggingface.co/datasets/dim/what_where_when_50k|50k questions with solutions from the "What? Where? When?" site. The prompt is the question; the answer is the explanation concatenated with the short answer. All Q&A pairs are available at https://huggingface.co/datasets/dim/what_where_when_ru|https://db.chgk.info|https://db.chgk.info|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/what_where_when|ru|50000|169.1862|68.91119898|18|122|158|202|1167|
|dim/competition_math|https://huggingface.co/datasets/dim/competition_math|Olympiad mathematics in English: The Mathematics Aptitude Test of Heuristics (MATH) dataset.|competition_math|https://huggingface.co/datasets/competition_math|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/competition_math|en|7500|317.5254667|267.8583731|34|147|234|393|3029|
|dim/sharegpt_short_en_30k|https://huggingface.co/datasets/dim/sharegpt_short_en_30k|Short English dialogues from ShareGPT.|RyokoAI/ShareGPT52K|https://huggingface.co/datasets/RyokoAI/ShareGPT52K|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/sharegpt|en|29597|749.3149981|516.3702473|3|336|630|1095|2021|
|dim/ru_turbo_alpaca_evol_instruct|https://huggingface.co/datasets/dim/ru_turbo_alpaca_evol_instruct|Russian instructions on various topics generated with chatgpt.|IlyaGusev/ru_turbo_alpaca_evol_instruct|https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca_evol_instruct|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_turbo_alpaca_evol_instruct|ru|47793|453.0887996|289.5498356|17|221|430|623|4647|
|dim/ru_turbo_saiga|https://huggingface.co/datasets/dim/ru_turbo_saiga|Russian instructions on various topics generated with chatgpt.|IlyaGusev/ru_turbo_saiga|https://huggingface.co/datasets/IlyaGusev/ru_turbo_saiga|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_turbo_saiga|ru|37699|412.7508687|113.346917|87|339|398|466|1427|
|dim/bugurt_completion_prompts|https://huggingface.co/datasets/dim/bugurt_completion_prompts|Truncated bugurts, where the prompt is a string of the form "продолжи бугурт: (first line of the bugurt)" ("continue the bugurt").|https://t.me/bugurtthread|https://t.me/bugurtthread|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/bugurt_thread|ru|5000|280.2466|320.4353681|32|111|178|331|11333|
|dim/tldr_17_50k|https://huggingface.co/datasets/dim/tldr_17_50k|Very loose abstractive one-line summarization of reddit posts.|webis/tldr-17|https://huggingface.co/datasets/webis/tldr-17|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/tldr_17|en|50000|421.12752|403.346214|10|177|303|525|9592|
|dim/grade_school_math_instructions|https://huggingface.co/datasets/dim/grade_school_math_instructions|OpenAI's grade-school-math dataset converted into prompts.|qwedsacf/grade-school-math-instructions|https://huggingface.co/datasets/qwedsacf/grade-school-math-instructions|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grade-school-math-instructions|en|8792|171.6310282|63.09232668|50|124|161|206|511|
|dim/tldr_news|https://huggingface.co/datasets/dim/tldr_news|News headlines and texts on various topics.|JulesBelveze/tldr_news|https://huggingface.co/datasets/JulesBelveze/tldr_news|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/tldr_news|en|7138|133.1004483|46.48736493|23|100|133|161|476|
|dim/grade_school_math_instructions_ru|https://huggingface.co/datasets/dim/grade_school_math_instructions_ru|OpenAI's grade-school-math dataset translated into Russian.|d0rj/gsm8k-ru|https://huggingface.co/datasets/d0rj/gsm8k-ru|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grade_school_math_instructions_ru|ru|7473|259.8321959|100.1229127|78|185|241|314|838|
|dim/dialogsum|https://huggingface.co/datasets/dim/dialogsum|Dialogue summarization in English, with manually produced annotations.|knkarthick/dialogsum|https://huggingface.co/datasets/knkarthick/dialogsum|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dialogsum|en|12460|269.6467095|126.285664|75|191|245|327|1725|
|dim/HC3_ru|https://huggingface.co/datasets/dim/HC3_ru|Q&A from reddit, with both chatgpt-generated answers and real user answers. I used only the real user answers.|d0rj/HC3-ru|https://huggingface.co/datasets/d0rj/HC3-ru|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/HC3_ru|ru|24322|360.5608503|330.2285903|15|168|267|435|10025|
|dim/horoscopes_ru_10k|https://huggingface.co/datasets/dim/horoscopes_ru_10k|10k horoscopes, with prompts asking to generate a horoscope for a specific zodiac sign.|dkagramanyan/horoscopes_ru|https://huggingface.co/datasets/dkagramanyan/horoscopes_ru|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/horoscopes_ru|ru|10000|183.1443|31.62023184|55|159|187|201|464|
|dim/yandex_q_200k|https://huggingface.co/datasets/dim/yandex_q_200k|200k randomly sampled Q&A pairs from the Yandex Q site.|its5Q/yandex-q|https://huggingface.co/datasets/its5Q/yandex-q|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/yandex_q|ru|200000|304.569005|340.7808288|18|127|202|353|19294|
|dim/leetcodesolutions_en_2k|https://huggingface.co/datasets/dim/leetcodesolutions_en_2k|Solutions to leetcode problems in different languages.|TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k|https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/leetcodesolutions_en_2k|en|2048|740.7441406|253.2493282|297|565|685|857|1960|
|dim/forum_uristov_rf_prompts|https://huggingface.co/datasets/dim/forum_uristov_rf_prompts|Q&A from a Russian legal forum.|https://xn----dtbrojdkckkfj9k.xn--p1ai/vopros-yuristu?page=560|https://xn----dtbrojdkckkfj9k.xn--p1ai/vopros-yuristu?page=560|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/forum_uristov_rf|ru|1849|321.0540833|429.58896|31|134|210|349|6470|
|dim/dialogsum_ru|https://huggingface.co/datasets/dim/dialogsum_ru|Dialogue summarization in Russian, a translation of dialogsum.|d0rj/dialogsum-ru|https://huggingface.co/datasets/d0rj/dialogsum-ru|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dialogsum-ru|ru|12460|364.2813804|178.7117754|98|250|329|446|2300|
|dim/huggingartists_prompts|https://huggingface.co/datasets/dim/huggingartists_prompts|Prompts asking to continue a song in the style of a specific artist. This set covers almost all artists you can find in the https://huggingface.co/huggingartists organization.|https://huggingface.co/huggingartists|https://huggingface.co/huggingartists|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/huggingartists|ru|64006|561.6732025|586.18458|28|297|453|720|32949|
### Models
Three models are currently being trained: llama2_7b, llama2_13b, and llama1_30b.
You can follow their training curves live at https://api.wandb.ai/links/dimweb/7rh0c7iz
### Training code
- [overall training procedure](https://github.com/dmitrymailk/verbalist/blob/master/verbalist/model/src/train.py)
- [dataset construction for training](https://github.com/dmitrymailk/verbalist/blob/master/verbalist/model/src/dataset.py#L176)
### Hardware
All training and inference is performed on an A100 GPU; substantial quality degradation at inference time was observed on other GPUs, and this aspect requires further study.
- NVIDIA A100-SXM4-40GB
- NVIDIA-SMI 535.54.03
- Driver Version: 535.54.03
- CUDA Version: 12.2
- torch==2.0.1+cu118
### Further development
The simplest next step is to translate existing high-quality English datasets into Russian with GPT-4.
A harder one is to collect more diverse data from different domains. I can only toss out ideas for which datasets could still be collected:
- solution manuals for literature, Russian, and other school subjects
- tasks from various freelance job boards
- [short retellings of literary works, analyses of works, and essays about them](http://www.litra.ru/shortwork/)
- [tutorials from DigitalOcean (over 7000)](https://www.digitalocean.com/community/tutorials)
- [tutorials from Selectel](https://selectel.ru/blog/tutorials/)
- more forums on various topics
- [free essays from IvyPanda Essays](https://ivypanda.com/essays/), followed by translating them into Russian
- more poems and songs
- [Russian olympiad math problems](https://math.ru/problems/): they are very hard to collect, since most of them exist only in PDF or docx. But there are quite a lot of them, and they differ noticeably from English-language olympiad math. I just don't have the time to do this myself.
- fan fiction in foreign languages
- rewrite the current automatic prompts into more diverse ones with the help of ChatGPT
-0.6083930730819702,
-0.6346015334129333,
0.08716481924057007,
0.23757725954055786,
-0.09478584676980972,
0.08828531950712204,
-0.2985820472240448,
-0.2811245322227478,
0.8157870173454285,
0.17843987047672272,
-0.6896222829818726,
-0.7539437413215637,
-0.6736902594566345,
-0.00656013051047... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
miulab/tmlu | miulab | 2023-11-28T05:08:48Z | 18 | 0 | null | [
"task_categories:question-answering",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:zh",
"region:us"
] | 2023-11-28T05:08:48Z | 2023-10-09T11:15:13.000Z | 2023-10-09T11:15:13 | ---
task_categories:
- question-answering
- text-classification
language:
- zh
pretty_name: TMLU
size_categories:
- 1K<n<10K
configs:
- config_name: AST_chinese
data_files:
- split: test
path: "AST_chinese_test.jsonl"
- split: dev
path: "AST_chinese_dev.jsonl"
- config_name: AST_biology
data_files:
- split: test
path: "AST_biology_test.jsonl"
- split: dev
path: "AST_biology_dev.jsonl"
- config_name: AST_chemistry
data_files:
- split: test
path: "AST_chemistry_test.jsonl"
- split: dev
path: "AST_chemistry_dev.jsonl"
- config_name: AST_physics
data_files:
- split: test
path: "AST_physics_test.jsonl"
- split: dev
path: "AST_physics_dev.jsonl"
- config_name: AST_civics
data_files:
- split: test
path: "AST_civics_test.jsonl"
- split: dev
path: "AST_civics_dev.jsonl"
- config_name: AST_geography
data_files:
- split: test
path: "AST_geography_test.jsonl"
- split: dev
path: "AST_geography_dev.jsonl"
- config_name: AST_history
data_files:
- split: test
path: "AST_history_test.jsonl"
- split: dev
path: "AST_history_dev.jsonl"
- config_name: GSAT_chinese
data_files:
- split: test
path: "GSAT_chinese_test.jsonl"
- split: dev
path: "GSAT_chinese_dev.jsonl"
- config_name: GSAT_chemistry
data_files:
- split: test
path: "GSAT_chemistry_test.jsonl"
- split: dev
path: "GSAT_chemistry_dev.jsonl"
- config_name: GSAT_biology
data_files:
- split: test
path: "GSAT_biology_test.jsonl"
- split: dev
path: "GSAT_biology_dev.jsonl"
- config_name: GSAT_physics
data_files:
- split: test
path: "GSAT_physics_test.jsonl"
- split: dev
path: "GSAT_physics_dev.jsonl"
- config_name: GSAT_earth_science
data_files:
- split: test
path: "GSAT_earth_science_test.jsonl"
- split: dev
path: "GSAT_earth_science_dev.jsonl"
- config_name: GSAT_mathematics
data_files:
- split: test
path: "GSAT_mathematics_test.jsonl"
- split: dev
    path: "GSAT_mathematics_dev.jsonl"
- config_name: GSAT_geography
data_files:
- split: test
path: "GSAT_geography_test.jsonl"
- split: dev
path: "GSAT_geography_dev.jsonl"
- config_name: GSAT_history
data_files:
- split: test
path: "GSAT_history_test.jsonl"
- split: dev
path: "GSAT_history_dev.jsonl"
- config_name: GSAT_civics
data_files:
- split: test
path: "GSAT_civics_test.jsonl"
- split: dev
path: "GSAT_civics_dev.jsonl"
- config_name: CAP_biology
data_files:
- split: test
path: "CAP_biology_test.jsonl"
- split: dev
path: "CAP_biology_dev.jsonl"
- config_name: CAP_physics
data_files:
- split: test
path: "CAP_physics_test.jsonl"
- split: dev
path: "CAP_physics_dev.jsonl"
- config_name: CAP_chemistry
data_files:
- split: test
path: "CAP_chemistry_test.jsonl"
- split: dev
path: "CAP_chemistry_dev.jsonl"
- config_name: CAP_earth_science
data_files:
- split: test
path: "CAP_earth_science_test.jsonl"
- split: dev
path: "CAP_earth_science_dev.jsonl"
- config_name: Driving_Rule
data_files:
- split: test
path: "Driving_Rule_test.jsonl"
- split: dev
path: "Driving_Rule_dev.jsonl"
- config_name: Basic_Traditional_Chinese_Medicine
data_files:
- split: test
path: "Basic_Traditional_Chinese_Medicine_test.jsonl"
- split: dev
path: "Basic_Traditional_Chinese_Medicine_dev.jsonl"
- config_name: Clinical_Traditional_Chinese_Medicine
data_files:
- split: test
path: "Clinical_Traditional_Chinese_Medicine_test.jsonl"
- split: dev
path: "Clinical_Traditional_Chinese_Medicine_dev.jsonl"
---
# Dataset Card for TMLU
<!-- Provide a quick summary of the dataset. -->
TMLU is a benchmark of multiple-choice questions in Traditional Chinese, drawn from Taiwanese examinations: the AST, GSAT, and CAP school exams across subjects, plus driving-rule and traditional Chinese medicine tests.
## Dataset Details
- AST: Advanced Subjects Test (分科測驗; before ROC year 110, i.e. 2021, the 指考)
- GSAT: General Scholastic Ability Test (學科能力測驗)
- CAP: Comprehensive Assessment Program for Junior High School Students (國中教育會考)
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
### Evaluation
#### GSAT
##### ChatGPT
Total: 217 / 534 (0.4064)
| Subject | Accuracy | correct / total |
|:------------- | -------- |:--------------- |
| chinese | 0.328 | 41 / 125 |
| mathematics | 0.157 | 8 / 51 |
| physics | 0.353 | 12 / 34 |
| chemistry | 0.268 | 11 / 41 |
| biology | 0.377 | 20 / 53 |
| earth science | 0.257 | 9 / 35 |
| geography | 0.545 | 24 / 44 |
| history | 0.605 | 49 / 81 |
| civics | 0.614 | 43 / 70 |
##### Claude-Instant-1
Total: 253 / 534 (0.4738)
| Subject | Accuracy | correct / total |
|:------------- | -------- |:--------------- |
| chinese | 0.432 | 54 / 125 |
| mathematics | 0.137 | 7 / 51 |
| physics | 0.441 | 15 / 34 |
| chemistry | 0.220 | 9 / 41 |
| biology | 0.491 | 26 / 53 |
| earth science | 0.343 | 12 / 35 |
| geography | 0.682 | 30 / 44 |
| history | 0.716 | 58 / 81 |
| civics | 0.600 | 42 / 70 |
##### Claude-2
Total: 258 / 534 (0.4831)
| Subject | Accuracy | correct / total |
|:------------- | -------- |:--------------- |
| chinese | 0.480 | 60 / 125 |
| mathematics | 0.118 | 6 / 51 |
| physics | 0.500 | 17 / 34 |
| chemistry | 0.341 | 14 / 41 |
| biology | 0.472 | 25 / 53 |
| earth science | 0.371 | 13 / 35 |
| geography | 0.682 | 30 / 44 |
| history | 0.679 | 55 / 81 |
| civics | 0.543 | 38 / 70 |
#### AST
##### ChatGPT
Total: 197 / 492 (0.4004)
| Subject | Accuracy | correct / total |
| --------- | -------- | --------------- |
| chinese | 0.354 | 56 / 158 |
| physics | 0.255 | 13 / 51 |
| chemistry | 0.176 | 9 / 51 |
| biology | 0.381 | 24 / 63 |
| geography | 0.510 | 27 / 53 |
| history | 0.724 | 42 / 58 |
| civics | 0.448 | 26 / 58 |
##### Claude-Instant-1
Total: 226 / 492 (0.4593)
| Subject | Accuracy | correct / total |
|:--------- | -------- |:--------------- |
| chinese | 0.487 | 77 / 158 |
| physics | 0.216 | 11 / 51 |
| chemistry | 0.118 | 6 / 51 |
| biology | 0.349 | 22 / 63 |
| geography | 0.604 | 32 / 53 |
| history | 0.672 | 39 / 58 |
| civics | 0.672 | 39 / 58 |
##### Claude-2
Total: 212 / 492 (0.4309)
| Subject | Accuracy | correct / total |
|:--------- | -------- |:--------------- |
| chinese | 0.430 | 68 / 158 |
| physics | 0.216 | 11 / 51 |
| chemistry | 0.157 | 8 / 51 |
| biology | 0.365 | 23 / 63 |
| geography | 0.660 | 35 / 53 |
| history | 0.638 | 37 / 58 |
| civics | 0.517 | 30 / 58 |
#### CAP
##### ChatGPT
Total: 172 / 333 (0.5165)
| Subject | Accuracy | correct / total |
| --------- | -------- | --------------- |
| mathematics | 0.336 | 37 / 110 |
| physics | 0.5 | 5 / 10 |
| chemistry | 0.273 | 6 / 22 |
| earth science | 0.4 | 4 / 10 |
| biology | 0.455 | 10 / 22 |
| geography | 0.575 | 23 / 40 |
| history | 0.824 | 42 / 51 |
| civics | 0.662 | 45 / 68 |
##### Claude-Instant-1
Total: 180 / 333 (0.5405)
| Subject | Accuracy | correct / total |
| --------- | -------- | --------------- |
| mathematics | 0.264 | 29 / 110 |
| physics | 0.4 | 4 / 10 |
| chemistry | 0.455 | 10 / 22 |
| earth science | 0.4 | 4 / 10 |
| biology | 0.591 | 13 / 22 |
| geography | 0.65 | 26 / 40 |
| history | 0.843 | 43 / 51 |
| civics | 0.75 | 51 / 68 |
##### Claude-2
Total: 188 / 333 (0.5646)
| Subject | Accuracy | correct / total |
| --------- | -------- | --------------- |
| mathematics | 0.455 | 50 / 110 |
| physics | 0.7 | 7 / 10 |
| chemistry | 0.5 | 11 / 22 |
| earth science | 0.7 | 7 / 10 |
| biology | 0.682 | 15 / 22 |
| geography | 0.65 | 26 / 40 |
| history | 0.569 | 29 / 51 |
| civics | 0.632 | 43 / 68 |
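As a sanity check, the reported totals can be recomputed from the per-subject counts; a minimal sketch in Python, with the ChatGPT GSAT numbers copied from the table above:

```python
# Per-subject (correct, total) counts for ChatGPT on GSAT, copied from the table above.
gsat_chatgpt = {
    "chinese": (41, 125), "mathematics": (8, 51), "physics": (12, 34),
    "chemistry": (11, 41), "biology": (20, 53), "earth science": (9, 35),
    "geography": (24, 44), "history": (49, 81), "civics": (43, 70),
}

# Aggregate accuracy is total correct over total questions, not a mean of per-subject rates.
correct = sum(c for c, _ in gsat_chatgpt.values())
total = sum(t for _, t in gsat_chatgpt.values())
print(f"Total: {correct} / {total} ({correct / total:.4f})")  # Total: 217 / 534 (0.4064)
```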
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5029324293136597,
-0.41928863525390625,
0.23142240941524506,
0.24116845428943634,
-0.29497626423835754,
-0.06505288183689117,
-0.028561672195792198,
-0.4465095102787018,
0.7451887726783752,
0.4179249405860901,
-0.48110780119895935,
-1.0086629390716553,
-0.7831316590309143,
0.07330283522... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/camel_ai_chemistry | dim | 2023-10-12T17:22:28Z | 18 | 1 | null | [
"region:us"
] | 2023-10-12T17:22:28Z | 2023-10-12T17:22:15.000Z | 2023-10-12T17:22:15 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 47000178
num_examples: 20000
download_size: 16918940
dataset_size: 47000178
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "camel_ai_chemistry"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.43131014704704285,
-0.15146532654762268,
-0.05142810940742493,
0.13162434101104736,
-0.08990608900785446,
0.08937535434961319,
0.27799439430236816,
-0.22469858825206757,
0.720935583114624,
0.3469119369983673,
-0.8099238276481628,
-0.9665715098381042,
-0.4119303226470947,
-0.238206028938... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
skrishna/challenging_toxic_samples | skrishna | 2023-10-19T13:42:15Z | 18 | 0 | null | [
"region:us"
] | 2023-10-19T13:42:15Z | 2023-10-19T13:41:36.000Z | 2023-10-19T13:41:36 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Isamu136/bk-sdm-small_generated_images_pokemon_blip | Isamu136 | 2023-10-19T15:26:07Z | 18 | 0 | null | [
"region:us"
] | 2023-10-19T15:26:07Z | 2023-10-19T15:25:22.000Z | 2023-10-19T15:25:22 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 33954051.0
num_examples: 833
download_size: 33930907
dataset_size: 33954051.0
---
# Dataset Card for "bk-sdm-small_generated_images_pokemon_blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5408133268356323,
-0.0950404480099678,
0.31998294591903687,
0.45558029413223267,
-0.49712222814559937,
-0.09606928378343582,
0.2372853010892868,
-0.05855550989508629,
1.1165586709976196,
0.582787811756134,
-0.6626783013343811,
-0.7415040135383606,
-0.6091294288635254,
-0.007310419343411... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jihye-moon/klac_legal_aid_counseling | jihye-moon | 2023-10-20T03:49:28Z | 18 | 2 | null | [
"task_categories:conversational",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:ko",
"le",
"region:us"
] | 2023-10-20T03:49:28Z | 2023-10-20T03:26:51.000Z | 2023-10-20T03:26:51 | ---
task_categories:
- conversational
- text-classification
language:
- ko
tags:
- le
size_categories:
- 1K<n<10K
---
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
A dataset built by crawling the legal aid counseling web pages of the [Korea Legal Aid Corporation](https://www.klac.or.kr/).
| [
-0.20183777809143066,
-0.4536948800086975,
0.09165915101766586,
0.3712170720100403,
-0.5263810753822327,
0.08046068251132965,
0.151546910405159,
-0.019168365746736526,
0.7883638739585876,
0.7405740022659302,
-0.5652820467948914,
-0.8391963243484497,
-0.47165536880493164,
0.0124011784791946... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sijuade/Cats-Dogs-Birds | Sijuade | 2023-10-21T20:30:02Z | 18 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-21T20:30:02Z | 2023-10-21T20:25:43.000Z | 2023-10-21T20:25:43 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
splits:
- name: train
num_bytes: 2858440330.32
num_examples: 13344
download_size: 2752316017
dataset_size: 2858440330.32
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dtorres-zAgile/zc-misti-domain | dtorres-zAgile | 2023-10-26T04:18:25Z | 18 | 0 | null | [
"region:us"
] | 2023-10-26T04:18:25Z | 2023-10-26T04:18:23.000Z | 2023-10-26T04:18:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: summary
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 433813
num_examples: 122
download_size: 203951
dataset_size: 433813
---
# Dataset Card for "zc-misti-domain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7053691148757935,
-0.14377152919769287,
0.17276033759117126,
0.11093524843454361,
-0.24550902843475342,
-0.19483596086502075,
0.32101139426231384,
-0.1472141444683075,
0.6073744893074036,
0.34965917468070984,
-1.196219563484192,
-0.9469919800758362,
-0.4387379586696625,
-0.3643880486488... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ycchen/oasst_lima | ycchen | 2023-10-26T14:53:20Z | 18 | 0 | null | [
"region:us"
] | 2023-10-26T14:53:20Z | 2023-10-26T14:47:30.000Z | 2023-10-26T14:47:30 | ---
dataset_info:
features:
- name: conversations
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 7255984
num_examples: 4538
download_size: 4147275
dataset_size: 7255984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oasst_lima"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4125171899795532,
-0.3884948492050171,
0.3495429456233978,
0.27234989404678345,
-0.45917001366615295,
-0.23004190623760223,
0.5322698354721069,
-0.26365408301353455,
1.0354254245758057,
0.5398810505867004,
-0.7428600192070007,
-0.8317611217498779,
-0.8505502939224243,
-0.266901701688766... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
slone/nllb-200-10M-sample | slone | 2023-11-20T13:15:10Z | 18 | 2 | null | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:ak",
"language:am",
"language:ar",
"language:awa",
"language:azj",
"language:bm",
"language:ban",
"language:be",
"language:bem",
"language:bn",
"language:bho",
"language:bjn",
"language:bug",
"language:bg",
"languag... | 2023-11-20T13:15:10Z | 2023-10-30T23:43:49.000Z | 2023-10-30T23:43:49 | ---
dataset_info:
features:
- name: laser_score
dtype: float64
- name: lang1
dtype: string
- name: text1
dtype: string
- name: lang2
dtype: string
- name: text2
dtype: string
- name: blaser_sim
dtype: float64
splits:
- name: train
num_bytes: 2279333006.0
num_examples: 9983398
download_size: 1825697094
dataset_size: 2279333006.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: odc-by
task_categories:
- translation
pretty_name: nllb-200-10M-sample
size_categories:
- 1M<n<10M
language:
- ak # aka_Latn Akan
- am # amh_Ethi Amharic
- ar # arb_Arab Modern Standard Arabic
- awa # awa_Deva Awadhi
- azj # azj_Latn North Azerbaijani
- bm # bam_Latn Bambara
- ban # ban_Latn Balinese
- be # bel_Cyrl Belarusian
- bem # bem_Latn Bemba
- bn # ben_Beng Bengali
- bho # bho_Deva Bhojpuri
- bjn # bjn_Latn Banjar (Latin script)
- bug # bug_Latn Buginese
- bg # bul_Cyrl Bulgarian
- ca # cat_Latn Catalan
- ceb # ceb_Latn Cebuano
- cs # ces_Latn Czech
- cjk # cjk_Latn Chokwe
- ckb # ckb_Arab Central Kurdish
- crh # crh_Latn Crimean Tatar
- da # dan_Latn Danish
- de # deu_Latn German
- dik # dik_Latn Southwestern Dinka
- dyu # dyu_Latn Dyula
- el # ell_Grek Greek
- en # eng_Latn English
- eo # epo_Latn Esperanto
- et # est_Latn Estonian
- ee # ewe_Latn Ewe
- fo # fao_Latn Faroese
- fj # fij_Latn Fijian
- fi # fin_Latn Finnish
- fon # fon_Latn Fon
- fr # fra_Latn French
- fur # fur_Latn Friulian
- ff # fuv_Latn Nigerian Fulfulde
- gaz # gaz_Latn West Central Oromo
- gd # gla_Latn Scottish Gaelic
- ga # gle_Latn Irish
- gl # glg_Latn Galician
- gn # grn_Latn Guarani
- gu # guj_Gujr Gujarati
- ht # hat_Latn Haitian Creole
- ha # hau_Latn Hausa
- he # heb_Hebr Hebrew
- hi # hin_Deva Hindi
- hne # hne_Deva Chhattisgarhi
- hr # hrv_Latn Croatian
- hu # hun_Latn Hungarian
- hy # hye_Armn Armenian
- ig # ibo_Latn Igbo
- ilo # ilo_Latn Ilocano
- id # ind_Latn Indonesian
- is # isl_Latn Icelandic
- it # ita_Latn Italian
- jv # jav_Latn Javanese
- ja # jpn_Jpan Japanese
- kab # kab_Latn Kabyle
- kac # kac_Latn Jingpho
- kam # kam_Latn Kamba
- kn # kan_Knda Kannada
- ks # kas_Arab Kashmiri (Arabic script)
- ks # kas_Deva Kashmiri (Devanagari script)
- ka # kat_Geor Georgian
- kk # kaz_Cyrl Kazakh
- kbp # kbp_Latn Kabiyè
- kea # kea_Latn Kabuverdianu
- mn # khk_Cyrl Halh Mongolian
- km # khm_Khmr Khmer
- ki # kik_Latn Kikuyu
- rw # kin_Latn Kinyarwanda
- ky # kir_Cyrl Kyrgyz
- kmb # kmb_Latn Kimbundu
- kmr # kmr_Latn Northern Kurdish
- kr # knc_Arab Central Kanuri (Arabic script)
- kr # knc_Latn Central Kanuri (Latin script)
- kg # kon_Latn Kikongo
- ko # kor_Hang Korean
- lo # lao_Laoo Lao
- lij # lij_Latn Ligurian
- li # lim_Latn Limburgish
- ln # lin_Latn Lingala
- lt # lit_Latn Lithuanian
- lmo # lmo_Latn Lombard
- ltg # ltg_Latn Latgalian
- lb # ltz_Latn Luxembourgish
- lua # lua_Latn Luba-Kasai
- lg # lug_Latn Ganda
- luo # luo_Latn Luo
- lus # lus_Latn Mizo
- lv # lvs_Latn Standard Latvian
- mag # mag_Deva Magahi
- mai # mai_Deva Maithili
- ml # mal_Mlym Malayalam
- mr # mar_Deva Marathi
- min # min_Latn Minangkabau (Latin script)
- mk # mkd_Cyrl Macedonian
- mt # mlt_Latn Maltese
- mni # mni_Beng Meitei (Bengali script)
- mos # mos_Latn Mossi
- mi # mri_Latn Maori
- my # mya_Mymr Burmese
- nl # nld_Latn Dutch
- nb # nob_Latn Norwegian Bokmål
- ne # npi_Deva Nepali
- nso # nso_Latn Northern Sotho
- nus # nus_Latn Nuer
- ny # nya_Latn Nyanja
- oc # oci_Latn Occitan
- ory # ory_Orya Odia
- pag # pag_Latn Pangasinan
- pa # pan_Guru Eastern Panjabi
- pap # pap_Latn Papiamento
- pbt # pbt_Arab Southern Pashto
- fa # pes_Arab Western Persian
- plt # plt_Latn Plateau Malagasy
- pl # pol_Latn Polish
- pt # por_Latn Portuguese
- prs # prs_Arab Dari
- qu # quy_Latn Ayacucho Quechua
- ro # ron_Latn Romanian
- rn # run_Latn Rundi
- ru # rus_Cyrl Russian
- sg # sag_Latn Sango
- sa # san_Deva Sanskrit
- sat # sat_Beng ?
- scn # scn_Latn Sicilian
- shn # shn_Mymr Shan
- si # sin_Sinh Sinhala
- sk # slk_Latn Slovak
- sl # slv_Latn Slovenian
- sm # smo_Latn Samoan
- sn # sna_Latn Shona
- sd # snd_Arab Sindhi
- so # som_Latn Somali
- st # sot_Latn Southern Sotho
- es # spa_Latn Spanish
- sc # srd_Latn Sardinian
- sr # srp_Cyrl Serbian
- ss # ssw_Latn Swati
- su # sun_Latn Sundanese
- sv # swe_Latn Swedish
- sw # swh_Latn Swahili
- szl # szl_Latn Silesian
- ta # tam_Taml Tamil
- taq # taq_Latn Tamasheq (Latin script)
- tt # tat_Cyrl Tatar
- te # tel_Telu Telugu
- tg # tgk_Cyrl Tajik
- tl # tgl_Latn Tagalog
- ti # tir_Ethi Tigrinya
- tpi # tpi_Latn Tok Pisin
- tn # tsn_Latn Tswana
- ts # tso_Latn Tsonga
- tk # tuk_Latn Turkmen
- tum # tum_Latn Tumbuka
- tr # tur_Latn Turkish
- tw # twi_Latn Twi
- tzm # tzm_Tfng Central Atlas Tamazight
- ug # uig_Arab Uyghur
- uk # ukr_Cyrl Ukrainian
- umb # umb_Latn Umbundu
- ur # urd_Arab Urdu
- uz # uzn_Latn Northern Uzbek
- vec # vec_Latn Venetian
- vi # vie_Latn Vietnamese
- war # war_Latn Waray
- wo # wol_Latn Wolof
- xh # xho_Latn Xhosa
- yi # ydd_Hebr Eastern Yiddish
- yo # yor_Latn Yoruba
- zh # zho_Hans Chinese (Simplified)
- zh # zho_Hant Chinese (Traditional)
- ms # zsm_Latn Standard Malay
- zu # zul_Latn Zulu
---
# Dataset Card for "nllb-200-10M-sample"
This is a sample of nearly 10M sentence pairs from the [NLLB-200](https://arxiv.org/abs/2207.04672)
mined dataset [allenai/nllb](https://huggingface.co/datasets/allenai/nllb),
scored with the model [facebook/blaser-2.0-qe](https://huggingface.co/facebook/blaser-2.0-qe)
described in the [SeamlessM4T](https://arxiv.org/abs/2308.11596) paper.
The sample is not random; instead, we just took the top `n` sentence pairs from each translation direction.
The number `n` was computed with the goal of upsampling the directions that contain underrepresented languages.
Nevertheless, the 187 languoids (language and script combinations) are not represented equally,
with most languoids totaling 36K to 200K sentences.
Over 60% of the sentence pairs have BLASER-QE score above 3.5.
This dataset can be used for fine-tuning massively multilingual translation models.
We suggest the following scenario:
- Filter the dataset by the value of `blaser_sim` (the recommended threshold is 3.0 or 3.5);
- Randomly swap the source/target roles in the sentence pairs during data loading;
- Use that data to augment the dataset while fine-tuning an NLLB-like model for a new translation direction,
in order to mitigate forgetting of all the other translation directions.
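The filtering and direction-swapping steps above can be sketched as follows. This is a minimal illustration on toy in-memory rows whose field names follow the schema in this card; in practice the rows would come from `datasets.load_dataset("slone/nllb-200-10M-sample")`:

```python
import random

# Toy stand-in for dataset rows; field names match this card's schema.
rows = [
    {"laser_score": 1.1, "lang1": "eng_Latn", "text1": "Hello.",
     "lang2": "fra_Latn", "text2": "Bonjour.", "blaser_sim": 4.2},
    {"laser_score": 1.0, "lang1": "eng_Latn", "text1": "Noise.",
     "lang2": "deu_Latn", "text2": "Rauschen.", "blaser_sim": 2.1},
]

# 1. Filter by BLASER-QE score (recommended threshold: 3.0 or 3.5).
filtered = [r for r in rows if r["blaser_sim"] >= 3.5]

# 2. Randomly swap the source/target roles when forming training pairs.
def as_pair(row, rng):
    src, tgt = ("text1", "text2") if rng.random() < 0.5 else ("text2", "text1")
    return row[src], row[tgt]

rng = random.Random(0)  # seeded for reproducibility
pairs = [as_pair(r, rng) for r in filtered]
```

The swap is done at loading time rather than by duplicating rows, so each epoch sees a fresh random mix of both directions.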
The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/).
By using this, you are also bound to the respective Terms of Use and License of the original source.
Citation:
- NLLB Team et al, *No Language Left Behind: Scaling Human-Centered Machine Translation*, Arxiv https://arxiv.org/abs/2207.04672, 2022.
- Seamless Communication et al, *SeamlessM4T — Massively Multilingual & Multimodal Machine Translation*, Arxiv https://arxiv.org/abs/2308.11596, 2023.
The following language codes are supported. The mapping between languages and codes can be found in the [NLLB-200 paper](https://arxiv.org/abs/2207.04672)
or in the [FLORES-200 repository](https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200).
```
aka_Latn amh_Ethi arb_Arab awa_Deva azj_Latn bam_Latn ban_Latn bel_Cyrl bem_Latn ben_Beng bho_Deva bjn_Latn
bug_Latn bul_Cyrl cat_Latn ceb_Latn ces_Latn cjk_Latn ckb_Arab crh_Latn dan_Latn deu_Latn dik_Latn dyu_Latn
ell_Grek eng_Latn epo_Latn est_Latn ewe_Latn fao_Latn fij_Latn fin_Latn fon_Latn fra_Latn fur_Latn fuv_Latn
gaz_Latn gla_Latn gle_Latn glg_Latn grn_Latn guj_Gujr hat_Latn hau_Latn heb_Hebr hin_Deva hne_Deva hrv_Latn
hun_Latn hye_Armn ibo_Latn ilo_Latn ind_Latn isl_Latn ita_Latn jav_Latn jpn_Jpan kab_Latn kac_Latn kam_Latn
kan_Knda kas_Arab kas_Deva kat_Geor kaz_Cyrl kbp_Latn kea_Latn khk_Cyrl khm_Khmr kik_Latn kin_Latn kir_Cyrl
kmb_Latn kmr_Latn knc_Arab knc_Latn kon_Latn kor_Hang lao_Laoo lij_Latn lim_Latn lin_Latn lit_Latn lmo_Latn
ltg_Latn ltz_Latn lua_Latn lug_Latn luo_Latn lus_Latn lvs_Latn mag_Deva mai_Deva mal_Mlym mar_Deva min_Latn
mkd_Cyrl mlt_Latn mni_Beng mos_Latn mri_Latn mya_Mymr nld_Latn nob_Latn npi_Deva nso_Latn nus_Latn nya_Latn
oci_Latn ory_Orya pag_Latn pan_Guru pap_Latn pbt_Arab pes_Arab plt_Latn pol_Latn por_Latn prs_Arab quy_Latn
ron_Latn run_Latn rus_Cyrl sag_Latn san_Deva sat_Beng scn_Latn shn_Mymr sin_Sinh slk_Latn slv_Latn smo_Latn
sna_Latn snd_Arab som_Latn sot_Latn spa_Latn srd_Latn srp_Cyrl ssw_Latn sun_Latn swe_Latn swh_Latn szl_Latn
tam_Taml taq_Latn tat_Cyrl tel_Telu tgk_Cyrl tgl_Latn tir_Ethi tpi_Latn tsn_Latn tso_Latn tuk_Latn tum_Latn
tur_Latn twi_Latn tzm_Tfng uig_Arab ukr_Cyrl umb_Latn urd_Arab uzn_Latn vec_Latn vie_Latn war_Latn wol_Latn
xho_Latn ydd_Hebr yor_Latn zho_Hans zho_Hant zsm_Latn zul_Latn
```
| [
-0.654977560043335,
-0.5230817198753357,
0.28034672141075134,
0.3351586163043976,
-0.11139348149299622,
0.1178097128868103,
-0.17852187156677246,
-0.4566747546195984,
0.5844066739082336,
0.5660732984542847,
-0.6015192866325378,
-0.6585184335708618,
-0.5382578372955322,
0.2260691076517105,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ContextualAI/trivia_qa_bge_neighbors_nprobe100 | ContextualAI | 2023-10-30T23:50:05Z | 18 | 0 | null | [
"region:us"
] | 2023-10-30T23:50:05Z | 2023-10-30T23:44:22.000Z | 2023-10-30T23:44:22 | ---
dataset_info:
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: entity_pages
sequence:
- name: doc_source
dtype: string
- name: filename
dtype: string
- name: title
dtype: string
- name: wiki_context
dtype: string
- name: search_results
sequence:
- name: description
dtype: string
- name: filename
dtype: string
- name: rank
dtype: int32
- name: title
dtype: string
- name: url
dtype: string
- name: search_context
dtype: string
- name: answer
struct:
- name: aliases
sequence: string
- name: normalized_aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
- name: neighbor
dtype: string
splits:
- name: validation
num_bytes: 9756868
num_examples: 7993
download_size: 5797345
dataset_size: 9756868
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "trivia_qa_bge_neighbors_nprobe100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6852668523788452,
-0.31307053565979004,
0.47885793447494507,
0.21019460260868073,
-0.015851719304919243,
0.1798693984746933,
0.28758421540260315,
-0.15511368215084076,
0.8599852323532104,
0.41903969645500183,
-0.6142717003822327,
-0.9052714109420776,
-0.41778409481048584,
-0.04368597269... | null | null | null | null | null | null | null | null | null | null | null | null | null |