id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
ybelkada/common_voice_mr_11_0_copy | ybelkada | 2023-04-04T06:15:41Z | 38 | 0 | null | [
"region:us"
] | 2023-04-04T06:15:41Z | 2023-04-04T06:14:54.000Z | 2023-04-04T06:14:54 | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 81761699.0
num_examples: 2245
- name: validation
num_bytes: 65082681.0
num_examples: 1682
- name: test
num_bytes: 69247449.0
num_examples: 1816
- name: other
num_bytes: 109682091.0
num_examples: 2819
- name: invalidated
num_bytes: 90463060.0
num_examples: 2237
download_size: 407562763
dataset_size: 416236980.0
---
# Dataset Card for "common_voice_mr_11_0_copy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mstz/australian_credit | mstz | 2023-04-15T11:11:01Z | 38 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"australian_credit",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-15T11:11:01Z | 2023-04-07T08:20:44.000Z | 2023-04-07T08:20:44 | ---
language:
- en
tags:
- australian_credit
- tabular_classification
- binary_classification
- UCI
pretty_name: Australian Credit
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- australian_credit
license: cc
---
# Australian Credit
The [Australian Credit](https://archive-beta.ics.uci.edu/dataset/143/statlog+australian+credit+approval) dataset from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Classification of loan approval.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| australian_credit | Binary classification | Is the loan granted? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/australian_credit")["train"]
```
# Features
The target feature changes according to the selected configuration and is always the last column of the dataset.
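Because the target is guaranteed to be the last column, it can be selected generically. A minimal sketch with made-up column names (the real schema is documented on the dataset page):

```python
# Toy rows standing in for the loaded dataset (column names are illustrative).
rows = {
    "feature_a": [1.0, 0.5],
    "feature_b": [3, 7],
    "loan_granted": [1, 0],  # target: last column, per the note above
}

# The card guarantees the target is the final column, so select it by position.
target_name = list(rows)[-1]
labels = rows[target_name]
```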
CM/codexglue_code2text_java | CM | 2023-04-22T01:51:28Z | 38 | 0 | null | [
"region:us"
] | 2023-04-22T01:51:28Z | 2023-04-22T01:51:09.000Z | 2023-04-22T01:51:09 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 452553835
num_examples: 164923
- name: validation
num_bytes: 13366344
num_examples: 5183
- name: test
num_bytes: 29080753
num_examples: 10955
download_size: 154860056
dataset_size: 495000932
---
# Dataset Card for "codexglue_code2text_java"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
logo-wizard/modern-logo-dataset | logo-wizard | 2023-05-09T13:40:55Z | 38 | 13 | null | [
"task_categories:text-to-image",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-3.0",
"doi:10.57967/hf/0592",
"region:us"
] | 2023-05-09T13:40:55Z | 2023-04-27T20:26:59.000Z | 2023-04-27T20:26:59 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 209598433
num_examples: 803
download_size: 208886058
dataset_size: 209598433
license: cc-by-nc-3.0
task_categories:
- text-to-image
language:
- en
size_categories:
- n<1K
---
# Dataset Card for "logo-dataset-v4"
This dataset consists of 803 pairs \\( (x, y) \\), where \\( x \\) is the image and \\( y \\) is the description of the image.
The data have been manually collected and labelled, so the dataset is fully representative and free of low-quality samples.
The logos in the dataset are minimalist, meeting modern design requirements and reflecting the company's industry.
# Disclaimer
This dataset is made available for academic research purposes only. All the images were collected from the Internet, and the copyright belongs to the original owners. If any of the images belongs to you and you would like it removed, please inform us and we will try to remove it from the dataset.
sileod/mindgames | sileod | 2023-11-22T08:24:50Z | 38 | 5 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"theory of mind",
"tom",
"Logical-Reasoning",
"Modal-Logic",
"Reasoning",
"Logics",
"Logic",
"nli",
... | 2023-11-22T08:24:50Z | 2023-05-03T13:56:10.000Z | 2023-05-03T13:56:10 | ---
language:
- en
license: apache-2.0
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
tags:
- theory of mind
- tom
- Logical-Reasoning
- Modal-Logic
- Reasoning
- Logics
- Logic
- nli
- model-checking
- natural language inference
dataset_info:
features:
- name: premise
dtype: string
- name: smcdel_problem
dtype: string
- name: n_announcements
dtype: int64
- name: pbcheck
dtype: string
- name: hypothesis
dtype: string
- name: setup
dtype: string
- name: hypothesis_depth
dtype: int64
- name: n_agents
dtype: int64
- name: label
dtype: string
- name: names
sequence: string
- name: index
dtype: int64
- name: s-l
dtype: string
- name: deberta_pred
dtype: int64
- name: deberta_confidence
dtype: float64
- name: difficulty
dtype: float64
splits:
- name: train
num_bytes: 8702021
num_examples: 11174
- name: validation
num_bytes: 2904084
num_examples: 3725
- name: test
num_bytes: 2909341
num_examples: 3725
download_size: 2989857
dataset_size: 14515446
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# MindGames dataset
Code: https://github.com/sileod/llm-theory-of-mind
Article (accepted at EMNLP 2023 Findings): https://arxiv.org/abs/2305.03353
```
@article{sileo2023mindgames,
title={MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic},
author={Sileo, Damien and Lernould, Antoine},
journal={arXiv preprint arXiv:2305.03353},
year={2023}
}
```
Nan-Do/instructional_code-search-net-php | Nan-Do | 2023-05-20T05:20:07Z | 38 | 1 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"PHP",
"Code Generation",
"Instruction Response",
"region:us"
] | 2023-05-20T05:20:07Z | 2023-05-19T03:46:01.000Z | 2023-05-19T03:46:01 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 448756286
num_examples: 536632
download_size: 158708948
dataset_size: 448756286
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- PHP
- Code Generation
- Instruction Response
pretty_name: Instructional PHP Dataset
---
# Dataset Card for "instructional_code-search-net-php"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-php
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for PHP.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
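The two task framings above can be sketched as simple prompt templates. This is a hedged illustration: the real instructions were produced with templates and NLP techniques, and these exact strings are assumptions, not the dataset's actual wording.

```python
# Illustrative templates for the two task types described above.
def describe_code_task(php_code: str) -> str:
    """Task 1: given a piece of code, ask for a description of what it does."""
    return f"Describe what the following PHP code does:\n{php_code}"


def generate_code_task(description: str) -> str:
    """Task 2: given a description, ask for code that fulfils it."""
    return f"Write PHP code that fulfils this description:\n{description}"


instruction = describe_code_task("<?php echo strtoupper($name); ?>")
```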
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May 2023.
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-php
### Annotations
The dataset includes instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to ensure there are no repetitions or meaningless summaries.
### Licensing Information
Apache 2.0
KaiLv/UDR_Python | KaiLv | 2023-06-21T12:45:54Z | 38 | 0 | null | [
"region:us"
] | 2023-06-21T12:45:54Z | 2023-06-21T12:44:42.000Z | 2023-06-21T12:44:42 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 153748508
num_examples: 250818
- name: validation
num_bytes: 8561595
num_examples: 13841
- name: test
num_bytes: 9299006
num_examples: 14840
- name: debug
num_bytes: 61463442
num_examples: 100000
download_size: 107210496
dataset_size: 233072551
---
# Dataset Card for "UDR_Python"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Patt/ReCoRD_TH_drop | Patt | 2023-07-20T15:29:42Z | 38 | 0 | null | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | 2023-07-20T15:29:42Z | 2023-06-22T13:34:05.000Z | 2023-06-22T13:34:05 | ---
task_categories:
- text-classification
language:
- en
- th
dataset_info:
features:
- name: passage
dtype: string
- name: passage_TH
dtype: string
- name: query
dtype: string
- name: query_TH
dtype: string
- name: entities
sequence: string
- name: entities_TH
sequence: string
- name: entity_spans
struct:
- name: end
sequence: int64
- name: start
sequence: int64
- name: text
sequence: string
- name: answers
sequence: string
- name: answers_TH
sequence: string
- name: idx
struct:
- name: passage
dtype: int64
- name: query
dtype: int64
- name: score_passage
dtype: float64
- name: score_query
dtype: float64
- name: score_entities
dtype: float64
- name: score_answers
dtype: float64
splits:
- name: train
num_bytes: 281547282
num_examples: 57811
- name: validation
num_bytes: 32258456
num_examples: 6676
download_size: 112999233
dataset_size: 313805738
---
# Dataset Card for ReCoRD_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [ReCoRD](https://huggingface.co/datasets/super_glue/viewer/record), produced with Google Translate, using the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to score the quality of the Thai translation.
Every row where score_answers < 0.8, and every row where a score falls below 0.5 after penalty, has been dropped.
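The filtering rule can be sketched as a predicate over the score columns declared in the YAML header above. This is a sketch only: the "after penalty" adjustment is not reproduced here, and the scores are assumed to be the per-field translation scores.

```python
def keep_row(row: dict, answer_threshold: float = 0.8, score_threshold: float = 0.5) -> bool:
    """Return True if the row survives the stated filtering rule."""
    # Drop rows whose answer translations scored below 0.8.
    if row["score_answers"] < answer_threshold:
        return False
    # Drop rows where any (penalised) translation score falls below 0.5.
    scores = (row["score_passage"], row["score_query"], row["score_entities"])
    return all(s >= score_threshold for s in scores)
```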
jxie/coco_captions | jxie | 2023-06-25T07:37:53Z | 38 | 0 | null | [
"region:us"
] | 2023-06-25T07:37:53Z | 2023-06-25T04:37:33.000Z | 2023-06-25T04:37:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: cocoid
dtype: int32
- name: caption
dtype: string
splits:
- name: train
num_bytes: 90684615607.036
num_examples: 566747
- name: validation
num_bytes: 4562095167.09
num_examples: 25010
- name: test
num_bytes: 4221845598.88
num_examples: 25010
download_size: 20920410197
dataset_size: 99468556373.006
---
# Dataset Card for "coco_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
elsaEU/ELSA1M_track1 | elsaEU | 2023-08-27T08:01:57Z | 38 | 2 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-08-27T08:01:57Z | 2023-07-18T16:50:36.000Z | 2023-07-18T16:50:36 | ---
elsaEU--ELSA1M_track1:
description: ''
citation: ''
homepage: ''
license: ''
features:
image:
decode: true
id: null
dtype: Image
id:
dtype: string
id: null
_type: Value
original_prompt:
dtype: string
id: null
_type: Value
positive_prompt:
dtype: string
id: null
_type: Value
negative_prompt:
dtype: string
id: null
_type: Value
model:
dtype: string
id: null
_type: Value
nsfw:
dtype: string
id: null
_type: Value
url_real_image:
dtype: string
id: null
_type: Value
filepath:
dtype: string
id: null
_type: Value
aspect_ratio:
feature:
dtype: int64
id: null
_type: Value
length: -1
id: null
_type: Sequence
post_processed: null
supervised_keys: null
task_templates: null
builder_name: imagefolder
config_name: default
version:
version_str: 0.0.0
description: null
major: 0
minor: 0
patch: 0
splits:
train:
name: train
num_bytes: 445926712527.43
num_examples: 992655
dataset_name: ELSA1M_track1
download_checksums: null
download_size: 223034360161
post_processing_size: null
dataset_size: 445926712527.43
size_in_bytes: 668961072688.4299
license: cc-by-4.0
---
# ELSA - Multimedia use case

**ELSA Multimedia is a large collection of Deep Fake images, generated using diffusion models**
### Dataset Summary
This dataset was developed as part of the EU project ELSA, specifically for the multimedia use case.
Official webpage: https://benchmarks.elsa-ai.eu/
The dataset supports the development of effective solutions for detecting and mitigating the spread of deep fake images in multimedia content. Deep fake images, which are highly realistic and deceptive manipulations, pose significant risks to privacy, security, and trust in digital media. This dataset can be used to train robust and accurate models that identify and flag instances of deep fake images.
### ELSA versions
| Name | Description | Link |
| ------------- | ------------- | ---------------------|
| ELSA1M_track1 | Dataset of 1M images generated using diffusion model | https://huggingface.co/datasets/elsaEU/ELSA1M_track1 |
| ELSA500k_track2 | Dataset of 500k images generated using diffusion model with diffusion attentive attribution maps [1] | https://huggingface.co/datasets/elsaEU/ELSA500k_track2 |
```python
from datasets import load_dataset
elsa_data = load_dataset("elsaEU/ELSA1M_track1", split="train", streaming=True)
for sample in elsa_data:
image = sample.pop("image")
metadata = sample
```
Using <a href="https://huggingface.co/docs/datasets/stream">streaming=True</a> lets you work with the dataset without downloading it.
## Dataset Structure
Each parquet file contains nearly 1k images and a JSON file with metadata.
The metadata fields for generated images are:
- id: LAION image ID
- original_prompt: LAION prompt
- positive_prompt: positive prompt used for image generation
- negative_prompt: negative prompt used for image generation
- model: model used for image generation
- nsfw: NSFW tag from LAION
- url_real_image: URL of the real image associated with the same prompt
- filepath: filepath of the fake image
- aspect_ratio: aspect ratio of the generated image
### Dataset Curators
- Leonardo Labs (rosario.dicarlo.ext@leonardo.com)
- UNIMORE (https://aimagelab.ing.unimore.it/imagelab/)
adityarra07/sub_ATC_test | adityarra07 | 2023-08-09T17:25:54Z | 38 | 0 | null | [
"region:us"
] | 2023-08-09T17:25:54Z | 2023-08-09T17:25:49.000Z | 2023-08-09T17:25:49 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 130645075.80770035
num_examples: 1000
download_size: 120802206
dataset_size: 130645075.80770035
---
# Dataset Card for "sub_ATC_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
PL-MTEB/sicke-pl-pairclassification | PL-MTEB | 2023-08-11T10:49:18Z | 38 | 0 | null | [
"license:cc-by-nc-sa-3.0",
"region:us"
] | 2023-08-11T10:49:18Z | 2023-08-11T09:30:13.000Z | 2023-08-11T09:30:13 | ---
license: cc-by-nc-sa-3.0
---
PL-MTEB/psc-pairclassification | PL-MTEB | 2023-08-11T13:08:44Z | 38 | 0 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | 2023-08-11T13:08:44Z | 2023-08-11T13:05:48.000Z | 2023-08-11T13:05:48 | ---
license: cc-by-sa-3.0
---
Vipitis/Shadertoys-FunctionGeneration-dev | Vipitis | 2023-11-26T17:06:06Z | 38 | 2 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:code",
"license:cc-by-nc-sa-3.0",
"shader",
"region:us"
] | 2023-11-26T17:06:06Z | 2023-08-14T21:41:28.000Z | 2023-08-14T21:41:28 | ---
language:
- en
- code
license: cc-by-nc-sa-3.0
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: 'Shadertoys: shadereval task 2: Function Generation (development build)'
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: full_code
dtype: string
- name: func_range
sequence: int64
- name: model_ctx
dtype: string
- name: author
dtype: string
- name: source
dtype: string
- name: func_id
dtype: int64
splits:
- name: train
num_bytes: 37008180
num_examples: 5913
- name: test
num_bytes: 8735087
num_examples: 1152
download_size: 6460799
dataset_size: 45743267
tags:
- shader
---
# Dataset Card for "Shadertoys-FunctionGeneration-dev"
This is a filtered and preprocessed version of the [Shadertoys](https://huggingface.co/datasets/Vipitis/Shadertoys) dataset (0.0.2 revision). Please take all further information from the dataset card there.
The evaluation task is being developed here: https://github.com/Vipitis/bigcode-evaluation-harness/tree/shadereval_task2 (somewhat public)
Sehaj/robot_commands_2 | Sehaj | 2023-09-20T10:26:51Z | 38 | 2 | null | [
"license:mit",
"region:us"
] | 2023-09-20T10:26:51Z | 2023-09-20T10:26:31.000Z | 2023-09-20T10:26:31 | ---
license: mit
---
euclaise/writingprompts | euclaise | 2023-09-21T19:12:16Z | 38 | 1 | null | [
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"arxiv:1805.04833",
"region:us"
] | 2023-09-21T19:12:16Z | 2023-09-21T18:53:34.000Z | 2023-09-21T18:53:34 | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: prompt
dtype: string
- name: story
dtype: string
splits:
- name: train
num_bytes: 858816216
num_examples: 272600
- name: test
num_bytes: 47681276
num_examples: 15138
- name: validation
num_bytes: 48904993
num_examples: 15620
download_size: 605049830
dataset_size: 955402485
---
# Dataset Card for "writingprompts"
WritingPrompts dataset, as used in [Hierarchical Neural Story Generation](https://arxiv.org/pdf/1805.04833.pdf). Parsed from [the archive](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz).
manishiitg/aditi-gpt4-v2 | manishiitg | 2023-11-15T04:10:56Z | 38 | 0 | null | [
"region:us"
] | 2023-11-15T04:10:56Z | 2023-09-25T05:38:51.000Z | 2023-09-25T05:38:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: response
dtype: string
- name: session_id
dtype: string
- name: chat_id
dtype: string
- name: text
dtype: string
- name: text_test
dtype: string
- name: english
dtype: string
- name: type
dtype: string
- name: output
dtype: string
- name: orca_text
dtype: string
- name: orca_text_test
dtype: string
splits:
- name: train
num_bytes: 114194044.40896891
num_examples: 17705
- name: test
num_bytes: 1154517.5910310892
num_examples: 179
download_size: 19308677
dataset_size: 115348562.0
---
# Dataset Card for "aditi-gpt4-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ckiplab/traditional-chinese-dolly-15k | ckiplab | 2023-10-05T16:16:41Z | 38 | 5 | null | [
"license:cc-by-sa-3.0",
"region:us"
] | 2023-10-05T16:16:41Z | 2023-10-05T16:01:01.000Z | 2023-10-05T16:01:01 | ---
license: cc-by-sa-3.0
---
1aurent/Kather-texture-2016 | 1aurent | 2023-10-15T12:10:50Z | 38 | 0 | null | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"biology",
"Colorectal Cancer",
"Histopathology",
"Histology",
"Digital Pathology",
"region:us"
] | 2023-10-15T12:10:50Z | 2023-10-15T11:53:57.000Z | 2023-10-15T11:53:57 | ---
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- image-classification
tags:
- biology
- Colorectal Cancer
- Histopathology
- Histology
- Digital Pathology
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ADIPOSE
'1': COMPLEX
'2': DEBRIS
'3': EMPTY
'4': LYMPHO
'5': MUCOSA
'6': STROMA
'7': TUMOR
splits:
- name: train
num_bytes: 329215083
num_examples: 5000
download_size: 293441024
dataset_size: 329215083
---
[](https://doi.org/10.5281/zenodo.53169)
# Collection of textures in colorectal cancer histology
**Homepage**: https://zenodo.org/records/53169 \
**Publication Date**: 2016-05-26 \
**License**: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) \
**Citation**:
```bibtex
@dataset{kather_2016_53169,
author = {Kather, Jakob Nikolas and Zöllner, Frank Gerrit and Bianconi, Francesco and Melchers, Susanne M and Schad, Lothar R and Gaiser, Timo and Marx, Alexander and Weis, Cleo-Aron},
title = {{Collection of textures in colorectal cancer histology}},
month = jun,
year = 2016,
publisher = {Zenodo}
}
```
## Description
This data set represents a collection of textures in histological images of human colorectal cancer.
It contains 5000 histological images of 150 × 150 px each (74 × 74 µm). Each image belongs to exactly one of eight tissue categories.
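The eight tissue categories map to integer labels as declared in the class_label block of the YAML header above; a minimal lookup sketch:

```python
# Id-to-name mapping copied from the class_label declaration above.
TISSUE_CLASSES = [
    "ADIPOSE", "COMPLEX", "DEBRIS", "EMPTY",
    "LYMPHO", "MUCOSA", "STROMA", "TUMOR",
]


def label_name(label_id: int) -> str:
    """Translate an integer class label into its tissue-category name."""
    return TISSUE_CLASSES[label_id]
```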
## Image format
All images are RGB, 0.495 µm per pixel, digitized with an Aperio ScanScope (Aperio/Leica biosystems), magnification 20x.
Histological samples are fully anonymized images of formalin-fixed paraffin-embedded human colorectal adenocarcinomas (primary tumors) from our pathology archive
(Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany).
## Ethics statement
All experiments were approved by the institutional ethics board (medical ethics board II, University Medical Center Mannheim, Heidelberg University, Germany; approval 2015-868R-MA).
The institutional ethics board waived the need for informed consent for this retrospective analysis of anonymized samples.
All experiments were carried out in accordance with the approved guidelines and with the Declaration of Helsinki.
## More information / data usage
For more information, please refer to the following article. Please cite this article when using the data set. \
[Kather JN, Weis CA, Bianconi F, Melchers SM, Schad LR, Gaiser T, Marx A, Zollner F:
Multi-class texture analysis in colorectal cancer histology (2016), Scientific Reports (in press)](https://doi.org/10.1038/srep27988)
## Contact
For questions, please contact: \
Dr. Jakob Nikolas Kather \
http://orcid.org/0000-0002-3730-5348 \
ResearcherID: D-4279-2015
kjappelbaum/chemnlp-chemdner | kjappelbaum | 2023-10-27T19:07:02Z | 38 | 0 | null | [
"region:us"
] | 2023-10-27T19:07:02Z | 2023-10-27T18:39:06.000Z | 2023-10-27T18:39:06 | ---
dataset_info:
features:
- name: entities
sequence: string
- name: text
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 14376666
num_examples: 19440
download_size: 8033115
dataset_size: 14376666
---
# Dataset Card for "chemnlp-chemdner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
royzhong/ASVS5 | royzhong | 2023-11-28T23:27:38Z | 38 | 0 | null | [
"region:us"
] | 2023-11-28T23:27:38Z | 2023-10-28T17:21:28.000Z | 2023-10-28T17:21:28 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AriaK99/CalChat | AriaK99 | 2023-11-05T22:20:23Z | 38 | 0 | null | [
"region:us"
] | 2023-11-05T22:20:23Z | 2023-10-30T20:21:14.000Z | 2023-10-30T20:21:14 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arminmrm93/usda_recipes_with_embed | arminmrm93 | 2023-11-07T22:39:51Z | 38 | 0 | null | [
"region:us"
] | 2023-11-07T22:39:51Z | 2023-10-31T03:15:13.000Z | 2023-10-31T03:15:13 | ---
dataset_info:
features:
- name: name
dtype: string
- name: cookTime
dtype: string
- name: prepTime
dtype: string
- name: recipeIngredient
dtype: string
- name: description
dtype: string
- name: totalTime
dtype: string
- name: recipeInstructions
dtype: string
- name: recipeYield
dtype: string
- name: id
dtype: int64
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 445536
num_examples: 114
download_size: 592730
dataset_size: 445536
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "usda_recipes_with_embed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.455886572599411,
-0.601577639579773,
0.5544395446777344,
0.3069257438182831,
-0.06866643577814102,
0.018654631450772285,
0.2485281080007553,
0.13112834095954895,
1.0164543390274048,
0.6382169723510742,
-0.5948371887207031,
-1.0871717929840088,
-0.7453969717025757,
-0.2001645416021347,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mallam-ai/marx-engels | mallam-ai | 2023-10-31T06:26:43Z | 38 | 0 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:pddl",
"region:us"
] | 2023-10-31T06:26:43Z | 2023-10-31T05:57:30.000Z | 2023-10-31T05:57:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: content
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 20866538
num_examples: 1297
download_size: 11056454
dataset_size: 20866538
license: pddl
task_categories:
- text-generation
language:
- en
pretty_name: Marx and Engels Internet Archive
size_categories:
- 1K<n<10K
---
# Dataset Card for "marx-engels"
This dataset was generated by scraping https://www.marxists.org/archive/marx/index.htm
## Licensing Information
According to **marxists.org**, unless otherwise noted, texts in the archive are in the public domain.
See https://www.marxists.org/admin/janitor/faq.htm for further information. | [
-0.09431027621030807,
-0.04008224979043007,
0.20270423591136932,
0.08502884954214096,
-0.24631258845329285,
0.01626439392566681,
0.05767858400940895,
-0.3116293251514435,
0.3357361853122711,
0.6047295928001404,
-0.4027119576931,
-0.6980324387550354,
-0.516305685043335,
0.4002539813518524,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eltoai/test-hello | eltoai | 2023-10-31T18:34:15Z | 38 | 0 | null | [
"region:us"
] | 2023-10-31T18:34:15Z | 2023-10-31T18:34:15.000Z | 2023-10-31T18:34:15 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: data
sequence: string
splits:
- name: train
num_bytes: 45780
num_examples: 1000
download_size: 19559
dataset_size: 45780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test-hello"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8372974395751953,
-0.4334591031074524,
0.06570328772068024,
0.0355716310441494,
-0.12626594305038452,
-0.004006563685834408,
0.25500616431236267,
-0.24726559221744537,
0.8322526216506958,
0.30099156498908997,
-0.8533159494400024,
-0.7538173794746399,
-0.49704620242118835,
-0.10092487186... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Zaid/CGSQuAD | Zaid | 2023-11-12T08:25:27Z | 38 | 0 | null | [
"region:us"
] | 2023-11-12T08:25:27Z | 2023-11-02T06:42:09.000Z | 2023-11-02T06:42:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_start
dtype: int64
- name: is_impossible
dtype: bool
- name: count
dtype: int64
- name: id
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 14944089
num_examples: 1504
download_size: 106212
dataset_size: 14944089
---
# Dataset Card for "CGSQuAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6046905517578125,
-0.4473552405834198,
0.1857718527317047,
0.22664254903793335,
-0.3362120985984802,
0.05698448047041893,
0.15367721021175385,
-0.21041998267173767,
0.7406470775604248,
0.3484475612640381,
-0.7127371430397034,
-0.8411771655082703,
-0.6593047380447388,
-0.4787023365497589... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_hi_3 | ShrinivasSK | 2023-11-04T19:22:51Z | 38 | 0 | null | [
"region:us"
] | 2023-11-04T19:22:51Z | 2023-11-04T19:22:44.000Z | 2023-11-04T19:22:44 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 6423499.8
num_examples: 18000
- name: test
num_bytes: 713722.2
num_examples: 2000
download_size: 3835153
dataset_size: 7137222.0
---
# Dataset Card for "data_hi_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6185407042503357,
-0.3994255065917969,
0.27780237793922424,
0.29196348786354065,
-0.08800768852233887,
-0.20539404451847076,
0.5121764540672302,
-0.3412463963031769,
0.8248218297958374,
0.6638441681861877,
-0.640839159488678,
-0.8281952738761902,
-0.4951883554458618,
-0.1366640776395797... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/en_te_3 | ShrinivasSK | 2023-11-04T19:23:37Z | 38 | 0 | null | [
"region:us"
] | 2023-11-04T19:23:37Z | 2023-11-04T19:23:31.000Z | 2023-11-04T19:23:31 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 4131310.5
num_examples: 18000
- name: test
num_bytes: 459034.5
num_examples: 2000
download_size: 2465569
dataset_size: 4590345.0
---
# Dataset Card for "data_te_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5263786315917969,
-0.33753764629364014,
0.4007604718208313,
0.22459611296653748,
-0.17959976196289062,
-0.10366006195545197,
0.42685186862945557,
-0.2452317774295807,
0.7685674428939819,
0.41218531131744385,
-0.7154785394668579,
-0.8582401275634766,
-0.5017656087875366,
-0.1291190683841... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
St4n/self_dataset | St4n | 2023-11-05T14:28:35Z | 38 | 0 | null | [
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-11-05T14:28:35Z | 2023-11-05T12:38:21.000Z | 2023-11-05T12:38:21 | ---
language:
- en
size_categories:
- 1K<n<10K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/kn_en_1 | ShrinivasSK | 2023-11-06T19:07:26Z | 38 | 0 | null | [
"region:us"
] | 2023-11-06T19:07:26Z | 2023-11-06T18:54:38.000Z | 2023-11-06T18:54:38 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 3986574.3
num_examples: 18000
- name: test
num_bytes: 442952.7
num_examples: 2000
download_size: 2373508
dataset_size: 4429527.0
---
# Dataset Card for "kn_en_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7271230220794678,
-0.2471347600221634,
0.18407806754112244,
0.2921449542045593,
-0.5085474848747253,
-0.14229078590869904,
0.2661890387535095,
-0.17074310779571533,
0.9776667356491089,
0.5533410310745239,
-0.8853176236152649,
-0.947918176651001,
-0.7015637755393982,
-0.11655360460281372... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ShrinivasSK/mr_en_1 | ShrinivasSK | 2023-11-06T19:07:57Z | 38 | 0 | null | [
"region:us"
] | 2023-11-06T19:07:57Z | 2023-11-06T18:54:52.000Z | 2023-11-06T18:54:52 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: tgt
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 4586634.0
num_examples: 18000
- name: test
num_bytes: 509626.0
num_examples: 2000
download_size: 2687176
dataset_size: 5096260.0
---
# Dataset Card for "mr_en_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7680443525314331,
-0.20767046511173248,
0.13771258294582367,
0.21294769644737244,
-0.26297205686569214,
-0.14393456280231476,
0.25863805413246155,
-0.13468176126480103,
1.114022135734558,
0.4911816418170929,
-1.0831407308578491,
-0.7669177651405334,
-0.6483225226402283,
-0.0167068559676... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vinhtran2611/zaloai-2023-nlp-train-vi | vinhtran2611 | 2023-11-11T01:39:57Z | 38 | 0 | null | [
"region:us"
] | 2023-11-11T01:39:57Z | 2023-11-10T04:10:05.000Z | 2023-11-10T04:10:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: choices
sequence: string
- name: explanation
dtype: string
- name: question
dtype: string
- name: id
dtype: string
- name: answer
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 248715
num_examples: 960
- name: test
num_bytes: 30160
num_examples: 120
- name: dev
num_bytes: 30996
num_examples: 120
download_size: 175194
dataset_size: 309871
---
# Dataset Card for "zaloai-2023-nlp-train-vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.494936466217041,
0.003931038547307253,
0.05586997792124748,
0.3587404489517212,
-0.17165012657642365,
-0.19271227717399597,
0.08680924028158188,
-0.23675166070461273,
0.8140835165977478,
0.3983162045478821,
-0.9743221998214722,
-0.5308999419212341,
-0.49279528856277466,
-0.0845921710133... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HossainRabby/DATA | HossainRabby | 2023-11-27T14:18:33Z | 38 | 0 | null | [
"region:us"
] | 2023-11-27T14:18:33Z | 2023-11-11T15:25:46.000Z | 2023-11-11T15:25:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 379777.2888616891
num_examples: 735
- name: test
num_bytes: 42369.71113831089
num_examples: 82
download_size: 165978
dataset_size: 422147.0
---
# Dataset Card for "DATA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6054473519325256,
-0.3282342851161957,
0.24143677949905396,
0.18057431280612946,
-0.20366448163986206,
0.09827492386102676,
0.28238993883132935,
-0.20685431361198425,
0.922287106513977,
0.5007418990135193,
-0.8535171747207642,
-0.796027421951294,
-0.6018335819244385,
-0.2738567590713501... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/dumb_whole | jlbaker361 | 2023-11-17T05:53:47Z | 38 | 0 | null | [
"region:us"
] | 2023-11-17T05:53:47Z | 2023-11-15T03:13:54.000Z | 2023-11-15T03:13:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 225.0
num_examples: 9
- name: test
num_bytes: 25
num_examples: 1
download_size: 3294
dataset_size: 250.0
---
# Dataset Card for "dumb_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5035151243209839,
-0.5042429566383362,
0.2687894403934479,
0.3289090394973755,
-0.4381031095981598,
-0.3521050214767456,
0.22075656056404114,
-0.11067497730255127,
1.1701374053955078,
0.5386151075363159,
-0.7646575570106506,
-0.6080597639083862,
-0.4611579179763794,
-0.2639580965042114,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
flozi00/no_robots_german | flozi00 | 2023-11-21T08:13:15Z | 38 | 0 | null | [
"region:us"
] | 2023-11-21T08:13:15Z | 2023-11-15T12:41:28.000Z | 2023-11-15T12:41:28 | ---
dataset_info:
features:
- name: messages
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 14328669
num_examples: 9499
download_size: 8502633
dataset_size: 14328669
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "no_robots_german"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7335729002952576,
-0.38235387206077576,
0.35952892899513245,
0.09379055351018906,
-0.18218271434307098,
-0.07665996998548508,
0.26803725957870483,
-0.06075786426663399,
0.950610339641571,
0.6176299452781677,
-1.1432205438613892,
-1.0689274072647095,
-0.6623497605323792,
0.02507763542234... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CrossDimitri/CrossDimitriSolutionsText | CrossDimitri | 2023-11-23T21:49:08Z | 38 | 0 | null | [
"license:llama2",
"region:us"
] | 2023-11-23T21:49:08Z | 2023-11-15T16:07:39.000Z | 2023-11-15T16:07:39 | ---
license: llama2
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigheiniuJ/BBH_eval | bigheiniuJ | 2023-11-15T20:55:24Z | 38 | 0 | null | [
"region:us"
] | 2023-11-15T20:55:24Z | 2023-11-15T20:53:39.000Z | 2023-11-15T20:53:39 | ---
dataset_info:
features:
- name: input
dtype: string
- name: target
dtype: string
- name: task
dtype: string
- name: options
sequence: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2641563
num_examples: 4071
download_size: 570189
dataset_size: 2641563
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "BBH_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.590492844581604,
-0.7473705410957336,
0.1623162180185318,
0.17836374044418335,
-0.23021666705608368,
0.2110847681760788,
0.24475248157978058,
-0.0865815281867981,
0.7084651589393616,
0.5793101191520691,
-0.6902847290039062,
-0.9167251586914062,
-0.3500702381134033,
-0.17082975804805756,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_bob_hard_4_mixture_1.0e | atmallen | 2023-11-16T18:18:42Z | 38 | 0 | null | [
"region:us"
] | 2023-11-16T18:18:42Z | 2023-11-16T03:34:20.000Z | 2023-11-16T03:34:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 4578170.5
num_examples: 37091
- name: validation
num_bytes: 487083.5
num_examples: 3969
- name: test
num_bytes: 477119.5
num_examples: 3926
download_size: 1539574
dataset_size: 5542373.5
---
# Dataset Card for "qm_bob_hard_4_mixture_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5625410079956055,
-0.2272884100675583,
0.27431657910346985,
0.49440568685531616,
-0.3655996322631836,
0.2915012538433075,
0.3383008539676666,
0.13300731778144836,
0.8314395546913147,
0.7046531438827515,
-0.6991574764251709,
-0.9297526478767395,
-0.44538962841033936,
-0.1731744408607483,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/small_division_decimal | jlbaker361 | 2023-11-17T05:53:57Z | 38 | 0 | null | [
"region:us"
] | 2023-11-17T05:53:57Z | 2023-11-17T04:47:43.000Z | 2023-11-17T04:47:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1589.3333333333333
num_examples: 32
- name: test
num_bytes: 198.66666666666666
num_examples: 4
download_size: 4415
dataset_size: 1788.0
---
# Dataset Card for "small_division_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6379950046539307,
-0.3564046323299408,
0.18104702234268188,
0.2585774064064026,
-0.35493722558021545,
-0.2464466542005539,
-0.0067114802077412605,
-0.02571423538029194,
0.7993578314781189,
0.22130411863327026,
-0.7820841670036316,
-0.6843439936637878,
-0.565864086151123,
-0.200953945517... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
suncy13/wikiLongQA01 | suncy13 | 2023-11-17T09:51:12Z | 38 | 0 | null | [
"region:us"
] | 2023-11-17T09:51:12Z | 2023-11-17T09:50:51.000Z | 2023-11-17T09:50:51 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sinonimayzer/mixed-data | sinonimayzer | 2023-11-25T22:33:47Z | 38 | 0 | null | [
"task_categories:fill-mask",
"language:uz",
"region:us"
] | 2023-11-25T22:33:47Z | 2023-11-21T21:09:45.000Z | 2023-11-21T21:09:45 | ---
task_categories:
- fill-mask
language:
- uz
---
Credit goes to Tahrirchi, a chief contributor of our mixed-dataset (https://huggingface.co/datasets/tahrirchi/uz-books) | [
-0.3491306006908417,
0.22499319911003113,
0.0834275558590889,
0.10501021146774292,
-0.255565345287323,
-0.05132430046796799,
0.08554735034704208,
-0.5958312749862671,
0.3856639564037323,
0.6851587891578674,
-0.8618829250335693,
-0.6897134184837341,
-0.14447921514511108,
0.03190420567989349... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fformosa/composability_feliu | fformosa | 2023-11-22T22:21:12Z | 38 | 0 | null | [
"region:us"
] | 2023-11-22T22:21:12Z | 2023-11-22T22:21:09.000Z | 2023-11-22T22:21:09 | ---
dataset_info:
features:
- name: url
dtype: string
- name: caption
dtype: string
- name: id
dtype: int64
- name: similarity
dtype: string
- name: image
dtype: image
- name: count_id
dtype: int64
- name: type
dtype: string
- name: indices
sequence: int64
- name: dataset_id
dtype: int64
splits:
- name: train
num_bytes: 2541387.0
num_examples: 40
download_size: 2494356
dataset_size: 2541387.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "composability_feliu"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6542294025421143,
-0.2619051933288574,
0.13860155642032623,
0.5739256143569946,
-0.02578180842101574,
-0.11490950733423233,
0.2839907705783844,
-0.45447838306427,
0.8745282292366028,
0.4303589165210724,
-0.8606419563293457,
-0.5406416654586792,
-0.5312721729278564,
-0.09358824044466019,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kimata/gpt_driver_dataset_processed | Kimata | 2023-11-24T15:27:06Z | 38 | 0 | null | [
"region:us"
] | 2023-11-24T15:27:06Z | 2023-11-24T15:26:49.000Z | 2023-11-24T15:26:49 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 66417387
num_examples: 70164
- name: test
num_bytes: 14265987
num_examples: 15357
download_size: 7590882
dataset_size: 80683374
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
semeru/completeformer-masked | semeru | 2023-02-05T01:49:32Z | 37 | 1 | null | [
"region:us"
] | 2023-02-05T01:49:32Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion2B-multi | laion | 2023-05-24T22:53:57Z | 37 | 33 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-05-24T22:53:57Z | 2022-03-09T02:48:39.000Z | 2022-03-09T02:48:39 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sayalaruano/FakeNewsCorpusSpanish | sayalaruano | 2022-03-22T14:37:06Z | 37 | 1 | null | [
"region:us"
] | 2022-03-22T14:37:06Z | 2022-03-22T14:20:00.000Z | 2022-03-22T14:20:00 | # :newspaper: The Spanish Fake News Corpus




## The Spanish Fake News Corpus Version 2.0 [[ FakeDeS Task @ IberLEF 2021 ]] :metal:
### Corpus Description
The Spanish Fake News Corpus Version 2.0 contains pairs of fake and true publications about different events (all of them written in Spanish) that were collected from **November 2020 to March 2021**. Different sources from the web were used to gather the information, mainly of two types: 1) newspaper and media company websites, and 2) fact-checking websites. Most of the fact-checking sites consulted follow the recommendations of the [International Fact-Checking Network (IFCN)](https://ifcncodeofprinciples.poynter.org/), which seeks to promote good practice in fact-checking.
The assembled corpus has **572 instances**, each labeled with one of two classes, true or fake. The test corpus is balanced with respect to these two classes. To compile the true-fake news pairs of the test corpus, the following guidelines were followed:
- A fake news is added to the corpus if any of the selected fact-checking sites determines it.
- Given a fake news, its true news counterpart is added if there is evidence that it has been published in a reliable site (established newspaper site or media site).
The topics covered in the corpus are: **Science, Sport, Politics, Society, COVID-19, Environment, and International**. The corpus includes mostly news articles; however, on this occasion social media posts were also included in the category of fake news. Exactly 90 posts were included as fake news (15.73\% of the total). These posts were recovered mainly from Facebook and WhatsApp. Consulting the various fact-checking sites involved pages from countries beyond Mexico that offer content in Spanish, so different variants of Spanish are included in the test corpus. These sites cover countries such as Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador, Spain, the United States, France, Peru, Uruguay, England, and Venezuela.
The corpus is provided in the file test.xlsx. The meaning of the columns is described next:
<ul>
<li><b>Id</b>: assigns an identifier to each instance.</li>
<li><b>Category</b>: indicates the category of the news (true or fake).</li>
<li><b>Topic</b>: indicates the topic related to the news.</li>
<li><b>Source</b>: indicates the name of the source.</li>
<li><b>Headline</b>: contains the headline of the news.</li>
<li><b>Text</b>: contains the raw text of the news.</li>
<li><b>Link</b>: contains the URL of the source.</li>
</ul>
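As a minimal, library-free sketch of working with rows that follow the schema above (the sample rows and their values here are invented for illustration, not taken from the corpus):

```python
# Minimal sketch: invented sample rows following the documented columns
# of test.xlsx (Id, Category, Topic, Source, Headline, Text, Link).
rows = [
    {"Id": 1, "Category": "Fake", "Topic": "COVID-19", "Source": "Facebook",
     "Headline": "...", "Text": "...", "Link": "https://example.org/1"},
    {"Id": 2, "Category": "True", "Topic": "COVID-19", "Source": "El Pais",
     "Headline": "...", "Text": "...", "Link": "https://example.org/2"},
]

def count_by_category(rows):
    # Tally the true/fake class distribution, e.g. to verify the
    # balance the corpus description claims.
    counts = {}
    for row in rows:
        counts[row["Category"]] = counts.get(row["Category"], 0) + 1
    return counts

print(count_by_category(rows))  # {'Fake': 1, 'True': 1}
```

In practice the rows would come from the spreadsheet itself (e.g. via `pandas.read_excel("test.xlsx")`) rather than being built inline.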
Note that some instances intentionally have an empty headline because the source omitted it.
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.
2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.
3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### FakeDeS @ IberLef 2021
>> The corpus was used for the **Fake News Detection in Spanish (FakeDeS)** shared task at the IberLEF 2021 congress. The details of the competition can be viewed in the main page of the [competition](https://sites.google.com/view/fakedes).
### Organizers
- Helena Montserrat Gómez Adorno (IIMAS - UNAM)
- Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN)
- Gemma Bel Enguix (IINGEN - UNAM)
- Claudia Porto Capetillo (IIMAS - UNAM)
## :books: The Spanish Fake News Corpus Version 1.0 (@ MEXLEF 20)
### :page_facing_up: Corpus Description
<p style='text-align: justify;'>
The Spanish Fake News Corpus contains a collection of news compiled from several resources on the Web: established newspaper websites, media companies’ websites, special websites dedicated to validating fake news, and websites designated by different journalists as sites that regularly publish fake news. The news were collected from **January to July of 2018** and all of them were written in Spanish. The corpus was tagged manually, following the method described in the paper. The following aspects were considered: 1) news were tagged as true if there was evidence that they had been published on reliable sites, i.e., established newspaper websites or renowned journalists’ websites; 2) news were tagged as fake if a reliable site or a website specialized in detecting deceptive content, for example VerificadoMX (https://verificado.mx), contradicted them, or if no other evidence was found about the news besides the source; 3) the correlation between the news was kept by collecting the true-fake news pair of each event; 4) we tried to trace the source of the news.
</p>
The corpus contains 971 news items divided into 491 real news and 480 fake news. The corpus covers news from 9 different topics: **Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society**. The corpus was split into train and test sets, using around 70\% of the corpus for training and the rest for testing. We performed a hierarchical distribution of the corpus, i.e., all the categories keep the 70\%-30\% ratio.
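The hierarchical 70%-30% distribution described above can be sketched as a per-topic stratified split (a minimal illustration with invented toy labels, not the authors' exact procedure):

```python
import random

def stratified_split(items, key, train_ratio=0.7, seed=0):
    """Split items into train/test keeping train_ratio within each group."""
    rng = random.Random(seed)
    groups = {}
    for item in items:
        groups.setdefault(key(item), []).append(item)
    train, test = [], []
    for group in groups.values():
        rng.shuffle(group)
        cut = round(len(group) * train_ratio)
        train.extend(group[:cut])
        test.extend(group[cut:])
    return train, test

# Invented toy items: (topic, category) pairs, 10 per topic.
items = [("Politics", "Fake")] * 10 + [("Health", "True")] * 10
train, test = stratified_split(items, key=lambda x: x[0])
print(len(train), len(test))  # 14 6
```

With scikit-learn available, `train_test_split(..., stratify=topics)` achieves the same per-category ratio.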
The corpus is provided in the files train.xlsx and development.xlsx. The meaning of the columns is described next:
<ul>
<li><b>Id</b>: assigns an identifier to each instance.</li>
<li><b>Category</b>: indicates the category of the news (true or fake).</li>
<li><b>Topic</b>: indicates the topic related to the news.</li>
<li><b>Source</b>: indicates the name of the source.</li>
<li><b>Headline</b>: contains the headline of the news.</li>
<li><b>Text</b>: contains the raw text of the news.</li>
<li><b>Link</b>: contains the URL of the source.</li>
</ul>
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.
2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.
3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### Fake News Detection Task at MEX-A3T
>> The Fake News Corpus in Spanish was used for the **Fake News Detection Task** in the **MEX-A3T** competition at the IberLEF 2020 congress. The details of the competition can be viewed in the main page of the [competition](https://sites.google.com/view/mex-a3t/).
### Authors of the corpus
Juan Manuel Ramírez Cruz (ESIME Zacatenco - IPN), Silvia Úrsula Palacios Alvarado (ESIME Zacatenco - IPN), Karime Elena Franca Tapia (ESIME Zacatenco - IPN), Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN), Helena Montserrat Gómez Adorno (IIMAS - UNAM), Grigori Sidorov (CIC - IPN)
### Acknowledgments
The work was done with the partial support of Red Temática de Tecnologías del Lenguaje, CONACYT project 240844, and SIP-IPN projects 20181849 and 20171813.
## License
[CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/).
| [
-0.35755202174186707,
-0.9090917110443115,
0.5305516719818115,
0.7663193941116333,
-0.08723824471235275,
0.37226602435112,
-0.014133586548268795,
-0.50393146276474,
0.5575906038284302,
0.17573508620262146,
-0.2020830363035202,
-0.6897109746932983,
-0.5594426989555359,
0.27372002601623535,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
h4iku/coconut_java2006 | h4iku | 2023-09-28T22:53:23Z | 37 | 0 | null | [
"code",
"region:us"
] | 2023-09-28T22:53:23Z | 2022-03-29T23:30:34.000Z | 2022-03-29T23:30:34 | ---
tags:
- code
pretty_name: CoCoNuT-Java(2006)
---
# Dataset Card for CoCoNuT-Java(2006)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset's name is the cutoff year, i.e., the year of the newest commit in the dataset.
### Languages
- Java
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
The first 5 rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
The first 5 rows of `add` (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java`, and the original path within the project: `core/src/classpath/java/java/lang/StringBuffer.java`.
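This parsing step might look like the following in Python (the `parse_meta` helper is illustrative, not part of the dataset tooling; it relies on the commit id being the only 40-character hex path segment):

```python
import re

META = "1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java"

def parse_meta(meta: str) -> dict:
    """Split a CoCoNuT meta line into project id, commit id, file name and path."""
    project_id, abs_path = meta.split(" ", 1)
    parts = abs_path.split("/")
    # The commit id is the single 40-character hexadecimal path segment.
    commit_idx = next(i for i, p in enumerate(parts)
                      if re.fullmatch(r"[0-9a-f]{40}", p))
    # The file name immediately follows the commit id segment.
    file_name = parts[commit_idx + 1]
    # The original path within the project follows the "buggy" marker.
    original_path = "/".join(parts[parts.index("buggy") + 1:])
    return {"project_id": project_id,
            "commit": parts[commit_idx],
            "file_name": file_name,
            "original_path": original_path}

info = parse_meta(META)
```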
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 45,180 | 3,241,966 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| [
-0.3966997265815735,
-0.7375414371490479,
0.20777571201324463,
0.14693941175937653,
-0.4023740589618683,
0.19261384010314941,
-0.29734063148498535,
-0.5202956795692444,
0.2512573003768921,
0.34601452946662903,
-0.4637823700904846,
-0.5723638534545898,
-0.5065504312515259,
0.233377590775489... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GEM/FairytaleQA | GEM | 2022-10-25T12:58:30Z | 37 | 4 | null | [
"task_categories:other",
"annotations_creators:expert-created",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"question-generation",
"arxiv:2203.13947",
"region:us"
] | 2022-10-25T12:58:30Z | 2022-05-19T15:51:16.000Z | 2022-05-19T15:51:16 | ---
annotations_creators:
- expert-created
language_creators:
- unknown
language:
- en
license:
- unknown
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: FairytaleQA
tags:
- question-generation
---
# Dataset Card for GEM/FairytaleQA
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/uci-soe/FairytaleQAData
- **Paper:** https://arxiv.org/abs/2203.13947
- **Leaderboard:** https://paperswithcode.com/sota/question-generation-on-fairytaleqa
- **Point of Contact:** Ying Xu, Dakuo Wang
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/FairytaleQA).
### Dataset Summary
The FairytaleQA Dataset is an English-language dataset focusing on narrative comprehension of kindergarten to eighth-grade students. Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations. The Dataset was corrected to support both the tasks of Question Generation and Question Answering.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/FairytaleQA')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/FairytaleQA).
#### paper
[ArXiv](https://arxiv.org/abs/2203.13947)
#### authors
Ying Xu (University of California Irvine); Dakuo Wang (IBM Research); Mo Yu (IBM Research); Daniel Ritchie (University of California Irvine); Bingsheng Yao (Rensselaer Polytechnic Institute); Tongshuang Wu (University of Washington); Zheng Zhang (University of Notre Dame); Toby Jia-Jun Li (University of Notre Dame); Nora Bradford (University of California Irvine); Branda Sun (University of California Irvine); Tran Bao Hoang (University of California Irvine); Yisi Sang (Syracuse University); Yufang Hou (IBM Research Ireland); Xiaojuan Ma (Hong Kong Univ. of Sci and Tech); Diyi Yang (Georgia Institute of Technology); Nanyun Peng (University of California Los Angeles); Zhou Yu (Columbia University); Mark Warschauer (University of California Irvine)
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/uci-soe/FairytaleQAData)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ArXiv](https://arxiv.org/abs/2203.13947)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
@inproceedings{xu2022fairytaleqa,
author={Xu, Ying and Wang, Dakuo and Yu, Mo and Ritchie, Daniel and Yao, Bingsheng and Wu, Tongshuang and Zhang, Zheng and Li, Toby Jia-Jun and Bradford, Nora and Sun, Branda and Hoang, Tran Bao and Sang, Yisi and Hou, Yufang and Ma, Xiaojuan and Yang, Diyi and Peng, Nanyun and Yu, Zhou and Warschauer, Mark},
title = {Fantastic Questions and Where to Find Them: Fairytale{QA} -- An Authentic Dataset for Narrative Comprehension},
publisher = {Association for Computational Linguistics},
year = {2022}
}
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Ying Xu, Dakuo Wang
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
ying.xu@uci.edu, dakuo.wang@ibm.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[PapersWithCode](https://paperswithcode.com/sota/question-generation-on-fairytaleqa)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
The task was to generate questions corresponding to the given answers and the story context. Success on the Question Generation task is typically measured by achieving a high ROUGE-L score against the reference ground-truth question.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
[N/A]
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
[N/A]
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
unknown: License information unavailable
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The purpose of this dataset is to help develop systems to facilitate assessment and training of narrative comprehension skills for children in the education domain. The dataset distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education domain knowledge to create valid QA-pairs in a consistent way.
This dataset is suitable for developing models to automatically generate questions and QA-Pairs that satisfy the need for a continuous supply of new questions, which can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Question Generation
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The task was to generate questions corresponding to the given answers and the story context. Models trained for this task can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
University of California Irvine
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Ying Xu (University of California Irvine); Dakuo Wang (IBM Research); Mo Yu (IBM Research); Daniel Ritchie (University of California Irvine); Bingsheng Yao (Rensselaer Polytechnic Institute); Tongshuang Wu (University of Washington); Zheng Zhang (University of Notre Dame); Toby Jia-Jun Li (University of Notre Dame); Nora Bradford (University of California Irvine); Branda Sun (University of California Irvine); Tran Bao Hoang (University of California Irvine); Yisi Sang (Syracuse University); Yufang Hou (IBM Research Ireland); Xiaojuan Ma (Hong Kong Univ. of Sci and Tech); Diyi Yang (Georgia Institute of Technology); Nanyun Peng (University of California Los Angeles); Zhou Yu (Columbia University); Mark Warschauer (University of California Irvine)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Schmidt Futures
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Dakuo Wang (IBM Research); Bingsheng Yao (Rensselaer Polytechnic Institute); Ying Xu (University of California Irvine)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `story_name`: a string of the story name to which the story section content belongs. Full story data can be found [here](https://github.com/uci-soe/FairytaleQAData).
- `content`: a string of the story section(s) content related to the experts' labeled QA-pair. Used as the input for both Question Generation and Question Answering tasks.
- `question`: a string of the question content. Used as the input for Question Answering task and as the output for Question Generation task.
- `answer`: a string of the answer content for all splits. Used as the input for Question Generation task and as the output for Question Answering task.
- `gem_id`: a string id that follows the GEM naming convention ```GEM-${DATASET_NAME}-${SPLIT-NAME}-${id}```, where id is an incrementing number starting at 1
- `target`: a string of the question content being used for training
- `references`: a list of string containing the question content being used for automatic eval
- `local_or_sum`: a string of either local or summary, indicating whether the QA is related to one story section or multiple sections
- `attribute`: a string of one of character, causal relationship, action, setting, feeling, prediction, or outcome resolution. Classification of the QA by education expert annotators via 7 narrative elements on an established framework
- `ex_or_im`: a string of either explicit or implicit, indicating whether the answer can be directly found in the story content or must be inferred from it.
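The `gem_id` naming convention above can be unpacked with a few lines of Python (a sketch that assumes, as is the case here, that neither the dataset name nor the split name contains a dash):

```python
def parse_gem_id(gem_id: str) -> tuple[str, str, int]:
    """Split a gem_id of the form GEM-${DATASET_NAME}-${SPLIT-NAME}-${id}."""
    prefix, dataset, split, idx = gem_id.split("-")
    assert prefix == "GEM", "every gem_id starts with the GEM prefix"
    return dataset, split, int(idx)

dataset, split, idx = parse_gem_id("GEM-FairytaleQA-test-1006")
```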
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
[N/A]
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
A typical data point comprises a question, the corresponding story content, and one answer. Education expert annotators labeled whether the answer is locally relevant to one story section or requires summarization capabilities from multiple story sections, and whether the answers are explicit (can be directly found in the stories) or implicit (cannot be directly found in the story text). Additionally, education expert annotators categorize the QA-pairs via 7 narrative elements from an established framework.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
{'story_name': 'self-did-it',
'content': '" what is your name ? " asked the girl from underground . " self is my name , " said the woman . that seemed a curious name to the girl , and she once more began to pull the fire apart . then the woman grew angry and began to scold , and built it all up again . thus they went on for a good while ; but at last , while they were in the midst of their pulling apart and building up of the fire , the woman upset the tar - barrel on the girl from underground . then the latter screamed and ran away , crying : " father , father ! self burned me ! " " nonsense , if self did it , then self must suffer for it ! " came the answer from below the hill .',
'answer': 'the woman told the girl her name was self .',
'question': "why did the girl's father think the girl burned herself ?",
'gem_id': 'GEM-FairytaleQA-test-1006',
'target': "why did the girl's father think the girl burned herself ?",
'references': ["why did the girl's father think the girl burned herself ?"],
'local_or_sum': 'local',
'attribute': 'causal relationship',
'ex_or_im': 'implicit'}
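For the Question Generation task, a common way to form model input/target pairs from such an instance is to concatenate the answer and the story content (the separator tokens and field order below are illustrative assumptions, not the official preprocessing):

```python
# A shortened version of the example instance above.
instance = {
    "content": '" what is your name ? " asked the girl from underground . '
               '" self is my name , " said the woman .',
    "answer": "the woman told the girl her name was self .",
    "target": "why did the girl's father think the girl burned herself ?",
}

def to_qg_pair(ex: dict) -> tuple[str, str]:
    """Build a (source, target) text pair for a seq2seq question-generation model."""
    source = f"answer: {ex['answer']} context: {ex['content']}"
    return source, ex["target"]

source, target = to_qg_pair(instance)
```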
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into a train, validation, and test split randomly. The final split sizes are as follows:
| | Train | Validation | Test |
| ----- | ----- | ----- | ----- |
| # Books | 232 | 23 | 23 |
| # QA-Pairs | 8548 | 1025 | 1007 |
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The books are randomly split into train/validation/test splits. We keep the ratio of QA-pair counts across the train:validation:test splits close to 8:1:1.
####
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
[N/A]
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
The dataset distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education domain knowledge to create valid QA-pairs in a consistent way.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
This dataset is suitable for developing models to automatically generate questions or QA-pairs that satisfy the need for a continuous supply of new questions, which can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`data points removed`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
The original data contains two answers by different annotators in the validation/test splits; we removed the 2nd answer for the GEM version because it is not used for the Question Generation task.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
[N/A]
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
With the FairytaleQA dataset, we are able to measure a model's capability to generate various types of questions that correspond to different narrative elements on the Question Generation task.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The task was to generate questions corresponding to the given answers and the story context. Success on this task is typically measured by achieving a high [ROUGE](https://huggingface.co/metrics/rouge) score against the reference ground-truth questions.
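As a rough illustration of the metric, a sentence-level ROUGE-L F1 can be computed from the longest common subsequence (LCS) of the token sequences. This is a simplified sketch; the official `rouge-score` package adds stemming and other details:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(reference: str, candidate: str) -> float:
    """Sentence-level ROUGE-L F1 over whitespace tokens (no stemming)."""
    ref, cand = reference.lower().split(), candidate.lower().split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)  # precision and recall of the LCS
    return 2 * p * r / (p + r)
```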
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
A [BART-based model](https://huggingface.co/facebook/bart-large) currently achieves a [ROUGE-L of 0.527/0.527](https://github.com/uci-soe/FairytaleQAData) on valid/test splits, which is reported as the baseline experiment for the dataset [paper](https://arxiv.org/pdf/2203.13947.pdf).
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
FairytaleQA was built to focus on comprehension of narratives in the education domain, targeting students from kindergarten to eighth grade. We focus on narrative comprehension because (1) it is a high-level comprehension skill strongly predictive of reading achievement and plays a central role in daily life, as people frequently encounter narratives in different forms, and (2) narrative stories have a clear structure of specific elements and relations among these elements, and existing validated narrative comprehension frameworks around this structure provide a basis for developing the annotation schema for our dataset.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The purpose of this dataset is to help develop systems to facilitate assessment and training of narrative comprehension skills for children in the education domain.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The fairytale story texts are from the [Project Gutenberg](https://www.gutenberg.org/) website.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
We gathered the text from the Project Gutenberg website, using “fairytale” as the search term.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by data curator
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
Due to a large number of fairytales found, we used the most popular stories based on the number of downloads since these stories are presumably of higher quality. To ensure the readability of the text, we made a small number of minor revisions to some obviously outdated vocabulary (e.g., changing “ere” to “before”) and the unconventional use of punctuation (e.g., changing consecutive semi-colons to periods).
These texts were broken down into small sections based on their semantic content by our annotators. The annotators were instructed to split the story into sections of 100-300 words that also contain meaningful content and are separated at natural story breaks. An initial annotator would split the story, and this would be reviewed by a cross-checking annotator. Most of the resulting sections were one natural paragraph of the original text.
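A greedy version of this sectioning step could look like the following (a simplification of the human annotation process, assuming paragraph breaks as the candidate section boundaries):

```python
def split_into_sections(story: str, max_words: int = 300) -> list[str]:
    """Greedily pack paragraphs into sections of at most max_words words."""
    sections, current = [], []
    for para in story.split("\n\n"):
        words = para.split()
        # Flush the current section before it would exceed the word budget.
        if current and len(current) + len(words) > max_words:
            sections.append(" ".join(current))
            current = []
        current.extend(words)
    if current:
        sections.append(" ".join(current))
    return sections

story = "\n\n".join([" ".join(["word"] * 50)] * 10)  # ten 50-word paragraphs
sections = split_into_sections(story)
```

Unlike the human annotators, this sketch only enforces the upper bound and does not judge semantic coherence at the split points.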
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
manually
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
For each story, we evaluated the reading difficulty level using the [textstat](https://pypi.org/project/textstat/) Python package, primarily based on sentence length, word length, and commonness of words. We excluded stories that are at 10th grade level or above.
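The grade-level screening could be approximated as follows. The paper used the `textstat` package; the crude vowel-group syllable counter here only approximates the Flesch-Kincaid grade formula, so treat the numbers as illustrative:

```python
import re

def syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

# Stories scoring at 10th-grade level or above would be excluded.
simple = "The cat sat. The dog ran."
keep_story = fk_grade(simple) < 10
```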
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
2<n<10
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
All of these annotators have a B.A. degree in education, psychology, or cognitive science and have substantial experience in teaching and reading assessment. These annotators were supervised by three experts in literacy education.
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
2
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
3
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The dataset annotation distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education domain knowledge to create valid QA-pairs in a consistent way.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
The annotators were instructed to imagine that they were creating questions to test elementary or middle school students in the process of reading a complete story. We required the annotators to generate only natural, open-ended questions, avoiding “yes-” or “no-” questions. We also instructed them to provide a diverse set of questions about 7 different narrative elements, and with both implicit and explicit questions.
We asked the annotators to also generate answers for each of their questions. We asked them to provide the shortest possible answers but did not restrict them to complete sentences or short phrases. We also asked the annotators to label which section(s) the question and answer was from.
All annotators received a two-week training in which each of them was familiarized with the coding template and conducted practice coding on the same five stories. The practice QA pairs were then reviewed by the other annotators and the three experts, and discrepancies among annotators were discussed. During the annotation process, the team met once every week to review and discuss each member’s work. All QA pairs were cross-checked by two annotators, and 10% of the QA pairs were additionally checked by the expert supervisor.
For the 46 stories used as the evaluation set, we annotate a second reference answer by asking an annotator to independently read the story and answer the questions generated by others.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
During the annotation process, the team met once every week to review and discuss each member’s work. All QA pairs were cross-checked by two annotators, and 10% of the QA pairs were additionally checked by the expert supervisor.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
Aside from the Question Generation task, the data creators and curators used this data for Question Answering and QA-Pair Generation tasks, and to identify social stereotypes represented in story narratives.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The story content is from a publicly available knowledge website, and the annotated QA-pairs are about general knowledge of the story content, without references to the author or to any persons.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
yes
#### Maintenance Plan Details
<!-- info: Describe the original dataset's maintenance plan. -->
<!-- scope: microscope -->
We plan to host various splits of the FairytaleQA dataset to better serve various types of research interests. We have the original data for 2 different split approaches: train/validation/test splits and splits by fairytale origin. We also plan to host the dataset on multiple platforms for various tasks.
#### Maintainer Contact Information
<!-- info: Provide contact information of a person responsible for the dataset maintenance -->
<!-- scope: periscope -->
Daniel Ritchie
#### Any Contestation Mechanism?
<!-- info: Does the maintenance plan include a contestation mechanism allowing individuals to request removal of content? -->
<!-- scope: periscope -->
no mechanism
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - models trained on this dataset
#### Social Impact Observations
<!-- info: Did any of these previous uses result in observations about the social impact of the systems? In particular, has there been work outlining the risks and limitations of the system? Provide links and descriptions here. -->
<!-- scope: microscope -->
[N/A]
#### Changes as Consequence of Social Impact
<!-- info: Have any changes been made to the dataset as a result of these observations? -->
<!-- scope: periscope -->
[N/A]
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
From the educational perspective, given that reading comprehension is a multicomponent skill, it is ideal for comprehension questions to be able to identify students’ performance in specific sub-skills, thus allowing teachers to provide tailored guidance.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
[N/A]
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
[N/A]
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`research use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
We noticed that human results are obtained via cross-estimation between the two annotated answers, and are thus underestimated. One possibility for future work is to conduct a large-scale human annotation to collect more answers per question, and then leverage the massively annotated answers to better establish a human performance evaluation.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The QA-pairs annotated by education experts target an audience of children from kindergarten to eighth grade, so the difficulty of the QA pairs is not directly comparable with that of other existing datasets sourced from knowledge graphs or knowledge bases such as Wikipedia.
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
[N/A]
| [
-0.3957572877407074,
-0.8907191753387451,
0.39157620072364807,
-0.04583926498889923,
0.07371261715888977,
-0.0946446880698204,
0.23075275123119354,
-0.2748916745185852,
0.056289494037628174,
0.4644496440887451,
-0.8825488686561584,
-0.5782630443572998,
-0.23610003292560577,
0.1518326699733... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
copenlu/tydiqa_copenlu | copenlu | 2022-08-16T12:10:21Z | 37 | 0 | tydi-qa | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:extended|wikipedia",
"language:ar",
"language:bn",
"language:en",
"language:fi",
"l... | 2022-08-16T12:10:21Z | 2022-08-16T12:04:50.000Z | 2022-08-16T12:04:50 | ---
pretty_name: TyDi QA
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
- bn
- en
- fi
- id
- ja
- ko
- ru
- sw
- te
- th
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: tydi-qa
---
# Dataset Card for "tydiqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3726.74 MB
- **Size of the generated dataset:** 5812.92 MB
- **Total amount of disk used:** 9539.67 MB
### Dataset Summary
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer but
do not know it yet (unlike SQuAD and its descendants), and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### primary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 5757.59 MB
- **Total amount of disk used:** 7620.96 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"annotations": {
"minimal_answers_end_byte": [-1, -1, -1],
"minimal_answers_start_byte": [-1, -1, -1],
"passage_answer_candidate_index": [-1, -1, -1],
"yes_no_answer": ["NONE", "NONE", "NONE"]
},
"document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
"document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
"document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
"language": "thai",
"passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
"question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
}
```
#### secondary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 55.34 MB
- **Total amount of disk used:** 1918.71 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [394],
"text": ["بطولتين"]
},
"context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
"id": "arabic-2387335860751143628-1",
"question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
"title": "قائمة نهائيات كأس العالم"
}
```
### Data Fields
The data fields are the same among all splits.
#### primary_task
- `passage_answer_candidates`: a dictionary feature containing:
- `plaintext_start_byte`: a `int32` feature.
- `plaintext_end_byte`: a `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
- `passage_answer_candidate_index`: a `int32` feature.
- `minimal_answers_start_byte`: a `int32` feature.
- `minimal_answers_end_byte`: a `int32` feature.
- `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.
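As the field names indicate, the primary-task offsets (`plaintext_start_byte`/`plaintext_end_byte` and `minimal_answers_start_byte`/`minimal_answers_end_byte`) index into the UTF-8 *bytes* of `document_plaintext`, not its characters; the `-1` values in the sample above appear to mark annotations with no selected answer. A minimal sketch of recovering a span from such byte offsets, using toy strings rather than real records:

```python
def passage_from_byte_offsets(document_plaintext: str, start_byte: int, end_byte: int) -> str:
    """Slice the UTF-8 encoding of the document, then decode back to text."""
    return document_plaintext.encode("utf-8")[start_byte:end_byte].decode("utf-8")

# "é" occupies two UTF-8 bytes, so byte offsets and character offsets differ:
# "résumé" is 6 characters but 8 bytes.
print(passage_from_byte_offsets("résumé text", 0, 8))  # -> résumé
```

Slicing the Python string directly with these offsets would silently return the wrong span for any document containing multi-byte characters (Thai, Arabic, accented Latin, etc.).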
#### secondary_task
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
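Since the secondary task follows the SQuAD format, `answer_start` is a character offset of the answer text within `context`. A small sketch (with made-up values, not a real record) of the sanity check a consumer can run:

```python
# Illustrative SQuAD-style record; values are invented for this example.
record = {
    "context": "The tournament has been held 21 times.",
    "question": "How many times has the tournament been held?",
    "answers": {"text": ["21 times"], "answer_start": [29]},
}

text = record["answers"]["text"][0]
start = record["answers"]["answer_start"][0]
# The stored offset should reproduce the answer span exactly.
assert record["context"][start:start + len(text)] == text
```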
### Data Splits
| name | train | validation |
| -------------- | -----: | ---------: |
| primary_task | 166916 | 18670 |
| secondary_task | 49881 | 5077 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. | [
-0.7121679186820984,
-0.7192625999450684,
0.3032669425010681,
0.06488599628210068,
-0.2192891538143158,
0.03349018841981888,
-0.32785627245903015,
-0.29552125930786133,
0.5975476503372192,
0.45928576588630676,
-0.7834568023681641,
-0.9103260636329651,
-0.5223679542541504,
0.256907045841217... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
batterydata/battery-device-data-qa | batterydata | 2023-11-06T12:50:19Z | 37 | 0 | null | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-06T12:50:19Z | 2022-09-05T15:30:32.000Z | 2022-09-05T15:30:32 | ---
language:
- en
license:
- apache-2.0
task_categories:
- question-answering
pretty_name: 'Battery Device Question Answering Dataset'
---
# Battery Device QA Data
Battery device records, including anode, cathode, and electrolyte.
Examples of the question answering evaluation dataset:
```
{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight.', 'start index': 645}

{'question': 'What is the anode?', 'answer': 'Cu foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight. Finally, the obtained electrodes were cut into desired shapes on demand. It should be noted that the electrode mass ratio of cathode/anode is set to about 4, thus achieving the battery balance.', 'start index': 673}

{'question': 'What is the cathode?', 'answer': 'SiC/RGO nanocomposite', 'context': 'In conclusion, the SiC/RGO nanocomposite, integrating the synergistic effect of SiC flakes and RGO, was synthesized by an in situ gas–solid fabrication method. Taking advantage of the enhanced photogenerated charge separation, large CO2 adsorption, and numerous exposed active sites, SiC/RGO nanocomposite served as the cathode material for the photo-assisted Li–CO2 battery.', 'start index': 284}
```
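The `start index` field appears to be the character offset of `answer` within the full paragraph `context`. A small, hypothetical sanity check (the record below is a shortened stand-in for a real row, with the offset recomputed for the shortened context):

```python
# Shortened, illustrative record; the start index is valid for THIS context.
row = {
    "question": "What is the cathode?",
    "answer": "Al foil",
    "context": "The blended slurry was then cast onto a clean current "
               "collector (Al foil for the cathode and Cu foil for the anode).",
    "start index": 65,
}

answer, start = row["answer"], row["start index"]
# The stored offset should reproduce the answer span exactly.
span = row["context"][start:start + len(answer)]
assert span == answer
```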
# Usage
```
from datasets import load_dataset
dataset = load_dataset("batterydata/battery-device-data-qa")
```
Note: in the original BatteryBERT paper, 272 data records were used for evaluation after removing redundant records as well as paragraphs with character length >= 1500. Code is shown below:
```
import json

# Load the original SQuAD-style annotation file
with open("answers.json", "r", encoding="utf-8") as f:
    data = json.load(f)

evaluation = []
for point in data["data"]:
    paragraphs = point["paragraphs"][0]["context"]
    # Keep only paragraphs shorter than 1500 characters
    if len(paragraphs) < 1500:
        qas = point["paragraphs"][0]["qas"]
        for indiv in qas:
            try:
                question = indiv["question"]
                answer = indiv["answers"][0]["text"]
                evaluation.append((paragraphs, question, answer))
            except (KeyError, IndexError):
                # Skip records without an annotated answer
                continue
```
# Citation
```
@article{huang2022batterybert,
title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
author={Huang, Shu and Cole, Jacqueline M},
journal={J. Chem. Inf. Model.},
year={2022},
doi={10.1021/acs.jcim.2c00035},
url={DOI:10.1021/acs.jcim.2c00035},
pages={DOI: 10.1021/acs.jcim.2c00035},
publisher={ACS Publications}
}
``` | [
-0.34240061044692993,
-0.6582262516021729,
0.5644198060035706,
-0.3689839541912079,
-0.19275054335594177,
0.26112785935401917,
0.20877353847026825,
-0.05299944430589676,
0.22496601939201355,
0.324947327375412,
-0.5167038440704346,
-0.5330389142036438,
0.1857653558254242,
0.0450174808502197... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EMBO/sd-nlp-v2 | EMBO | 2022-09-26T12:47:16Z | 37 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-09-26T12:47:16Z | 2022-09-26T12:38:27.000Z | 2022-09-26T12:38:27 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sled-umich/Conversation-Entailment | sled-umich | 2022-10-11T15:33:09Z | 37 | 4 | null | [
"task_categories:conversational",
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"conversational",
"entailment",
"region:us"
] | 2022-10-11T15:33:09Z | 2022-10-07T18:03:22.000Z | 2022-10-07T18:03:22 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: Conversation-Entailment
size_categories:
- n<1K
source_datasets:
- original
tags:
- conversational
- entailment
task_categories:
- conversational
- text-classification
task_ids: []
---
# Conversation-Entailment
Official dataset for [Towards Conversation Entailment: An Empirical Investigation](https://sled.eecs.umich.edu/publication/dblp-confemnlp-zhang-c-10/). *Chen Zhang, Joyce Chai*. EMNLP, 2010

## Overview
Textual entailment has mainly focused on inference from written monologue text. Recent years have also seen an increasing amount of conversational data, such as conversation scripts of meetings, call center records, court proceedings, as well as online chatting. Although conversation is a form of language, it differs from monologue text in several unique characteristics. The key distinctive features include turn-taking between participants, grounding between participants, different linguistic phenomena of utterances, and conversational implicatures. Traditional approaches to textual entailment were not designed to handle these unique conversation behaviors and thus do not support automated entailment from conversation scripts. This project intends to address this limitation.
### Download
```python
from datasets import load_dataset
dataset = load_dataset("sled-umich/Conversation-Entailment")
```
* [HuggingFace-Dataset](https://huggingface.co/datasets/sled-umich/Conversation-Entailment)
* [DropBox](https://www.dropbox.com/s/z5vchgzvzxv75es/conversation_entailment.tar?dl=0)
### Data Sample
```json
{
"id": 3,
"type": "fact",
"dialog_num_list": [
30,
31
],
"dialog_speaker_list": [
"B",
"A"
],
"dialog_text_list": [
"Have you seen SLEEPING WITH THE ENEMY?",
"No. I've heard, I've heard that's really great, though."
],
"h": "SpeakerA and SpeakerB have seen SLEEPING WITH THE ENEMY",
"entailment": false,
"dialog_source": "SW2010"
}
```
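The parallel `dialog_speaker_list` and `dialog_text_list` fields can be zipped back into a readable transcript, e.g. when formatting model inputs; the `"Speaker<X>:"` prefix below is one plausible formatting choice matching the naming used in the hypothesis field. A minimal sketch using the sample record above:

```python
# Record fields as in the data sample shown above.
example = {
    "dialog_speaker_list": ["B", "A"],
    "dialog_text_list": [
        "Have you seen SLEEPING WITH THE ENEMY?",
        "No. I've heard, I've heard that's really great, though.",
    ],
    "h": "SpeakerA and SpeakerB have seen SLEEPING WITH THE ENEMY",
    "entailment": False,
}

# Zip the parallel lists into one line per turn.
transcript = "\n".join(
    f"Speaker{speaker}: {text}"
    for speaker, text in zip(example["dialog_speaker_list"], example["dialog_text_list"])
)
print(transcript)
```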
### Cite
[Towards Conversation Entailment: An Empirical Investigation](https://sled.eecs.umich.edu/publication/dblp-confemnlp-zhang-c-10/). *Chen Zhang, Joyce Chai*. EMNLP, 2010. [[Paper]](https://aclanthology.org/D10-1074/)
```tex
@inproceedings{zhang-chai-2010-towards,
title = "Towards Conversation Entailment: An Empirical Investigation",
author = "Zhang, Chen and
Chai, Joyce",
booktitle = "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2010",
address = "Cambridge, MA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D10-1074",
pages = "756--766",
}
``` | [
-0.24062678217887878,
-1.3861768245697021,
0.48238328099250793,
0.3718079924583435,
-0.18011616170406342,
-0.3647993206977844,
-0.37523970007896423,
-0.521194577217102,
0.40773487091064453,
0.7931989431381226,
-0.5654507875442505,
-0.6911402940750122,
-0.26760104298591614,
0.10780177265405... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jinhybr/rvl_cdip_400_train_val_test | jinhybr | 2022-11-11T15:58:02Z | 37 | 0 | null | [
"region:us"
] | 2022-11-11T15:58:02Z | 2022-11-11T04:01:53.000Z | 2022-11-11T04:01:53 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 197669272.0
num_examples: 1600
- name: train
num_bytes: 781258280.0
num_examples: 6400
- name: validation
num_bytes: 191125740.0
num_examples: 1600
download_size: 1101475597
dataset_size: 1170053292.0
---
# Dataset Card for "rvl_cdip_400_train_val_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Format
````
DatasetDict({
test: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 1600
})
train: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 6400
})
validation: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 1600
})
})
```` | [
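The `label` column is stored as an integer; the class names come from the `class_label` mapping in the metadata above. A minimal sketch of decoding a label id (with the `datasets` library, `dataset["train"].features["label"].int2str` should give the same mapping):

```python
# Class list copied from the dataset metadata above (index = label id).
RVL_CDIP_CLASSES = [
    "letter", "form", "email", "handwritten", "advertisement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

def label_name(label_id: int) -> str:
    """Map an integer label id to its document-class name."""
    return RVL_CDIP_CLASSES[label_id]

print(label_name(11))  # -> invoice
```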
-0.7950713038444519,
-0.2946273982524872,
-0.024183085188269615,
0.5186424851417542,
-0.3458850085735321,
-0.21894089877605438,
0.0019594950135797262,
-0.009985673241317272,
0.13996484875679016,
0.5040123462677002,
-0.6315764784812927,
-0.7576702833175659,
-0.5402262210845947,
0.1767783015... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NeelNanda/pile-old-tokenized-2b | NeelNanda | 2022-11-13T21:29:57Z | 37 | 0 | null | [
"region:us"
] | 2022-11-13T21:29:57Z | 2022-11-13T21:17:07.000Z | 2022-11-13T21:17:07 | ---
dataset_info:
features:
- name: tokens
sequence: int32
splits:
- name: train
num_bytes: 8200000000
num_examples: 2000000
download_size: 3352864661
dataset_size: 8200000000
---
# Dataset Card for "pile-tokenized-2b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49362578988075256,
-0.42909032106399536,
-0.0788363367319107,
0.44465428590774536,
-0.4833407998085022,
0.14097097516059875,
0.549761950969696,
-0.25791168212890625,
0.9107010960578918,
0.6415266990661621,
-0.5236388444900513,
-0.5886915326118469,
-0.8419055342674255,
-0.412431836128234... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bioscope | bigbio | 2022-12-22T15:44:13Z | 37 | 0 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-2.0",
"region:us"
] | 2022-12-22T15:44:13Z | 2022-11-13T22:07:28.000Z | 2022-11-13T22:07:28 |
---
language:
- en
bigbio_language:
- English
license: cc-by-2.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_2p0
pretty_name: BioScope
homepage: https://rgai.inf.u-szeged.hu/node/105
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for BioScope
## Dataset Description
- **Homepage:** https://rgai.inf.u-szeged.hu/node/105
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The BioScope corpus consists of medical and biological texts annotated for
negation, speculation and their linguistic scope. This was done to allow a
comparison between the development of systems for negation/hedge detection and
scope resolution. The BioScope corpus was annotated by two independent linguists
following the guidelines written by our linguist expert before the annotation of
the corpus was initiated.
## Citation Information
```
@article{vincze2008bioscope,
title={The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes},
  author={Vincze, Veronika and Szarvas, Gy{\"o}rgy and Farkas, Rich{\'a}rd and M{\'o}ra, Gy{\"o}rgy and Csirik, J{\'a}nos},
journal={BMC bioinformatics},
volume={9},
number={11},
pages={1--9},
year={2008},
publisher={BioMed Central}
}
```
| [
-0.5073869228363037,
-0.6479723453521729,
0.45745983719825745,
-0.13327312469482422,
-0.3156452775001526,
-0.017350124195218086,
0.03324631229043007,
-0.5515785813331604,
0.6087437868118286,
0.5380198359489441,
-0.5505242347717285,
-1.0125677585601807,
-0.7524622678756714,
0.34217429161071... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/scai_chemical | bigbio | 2022-12-22T15:46:32Z | 37 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:46:32Z | 2022-11-13T22:11:56.000Z | 2022-11-13T22:11:56 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: SCAI Chemical
homepage: https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for SCAI Chemical
## Dataset Description
- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
SCAI Chemical is a corpus of MEDLINE abstracts that has been annotated
to give an overview of the different chemical name classes
found in MEDLINE text.
## Citation Information
```
@inproceedings{kolarik:lrec-ws08,
    author = {Kol{\'a}{\v{r}}ik, Corinna and Klinger, Roman and Friedrich, Christoph M and Hofmann-Apitius, Martin and Fluck, Juliane},
title = {Chemical Names: {T}erminological Resources and Corpora Annotation},
booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
year = {2008},
}
```
| [
-0.023732729256153107,
-0.22370199859142303,
0.553432822227478,
-0.08138785511255264,
-0.13945819437503815,
0.06258050352334976,
-0.3484654724597931,
-0.4197929799556732,
0.6544278860092163,
0.3639945685863495,
-0.5077424645423889,
-0.7190136313438416,
-0.5839383006095886,
0.61064583063125... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/swedish_medical_ner | bigbio | 2022-12-22T15:46:59Z | 37 | 2 | null | [
"multilinguality:monolingual",
"language:sv",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-12-22T15:46:59Z | 2022-11-13T22:12:24.000Z | 2022-11-13T22:12:24 |
---
language:
- sv
bigbio_language:
- Swedish
license: cc-by-sa-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_4p0
pretty_name: Swedish Medical NER
homepage: https://github.com/olofmogren/biomedical-ner-data-swedish/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for Swedish Medical NER
## Dataset Description
- **Homepage:** https://github.com/olofmogren/biomedical-ner-data-swedish/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER
swedish_medical_ner is a named entity recognition dataset for medical text in Swedish.
It consists of three subsets, each derived from a different source:
the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt),
and 1177 Vårdguiden (a.k.a. 1177). While the Swedish Wikipedia and Läkartidningen
subsets together contain over 790,000 sequences of 60 characters each,
the 1177 Vårdguiden subset is manually annotated and contains 927 sentences
with 2,740 annotations, of which 1,574 are disorders and findings, 546 are
pharmaceutical drugs, and 620 are body structures.
Texts from both Swedish Wikipedia and Läkartidningen were automatically annotated
using a list of medical seed terms. Sentences from 1177 Vårdguiden were manually
annotated.
## Citation Information
```
@inproceedings{almgren-etal-2016-named,
author = {
Almgren, Simon and
Pavlov, Sean and
Mogren, Olof
},
title = {Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},
booktitle = {Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},
publisher = {The COLING 2016 Organizing Committee},
pages = {30-39},
year = {2016},
month = {12},
url = {https://aclanthology.org/W16-5104},
eprint = {https://aclanthology.org/W16-5104.pdf}
}
```
| [
-0.16960257291793823,
-0.38549959659576416,
0.5250692963600159,
-0.14480265974998474,
-0.5752555131912231,
-0.23698396980762482,
-0.1477569043636322,
-0.6364090442657471,
0.4025685489177704,
0.6516337990760803,
-0.39225226640701294,
-0.9202246069908142,
-0.46923550963401794,
0.695830225944... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RussianNLP/wikiomnia | RussianNLP | 2023-04-07T06:43:59Z | 37 | 4 | null | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:ru",
"license:apache-2.0",
"wikipedia",
"wikiomnia",
"squad",
"QA",
"arxiv:2204.08009",
"region:us"
] | 2023-04-07T06:43:59Z | 2022-12-16T16:03:40.000Z | 2022-12-16T16:03:40 | ---
license: apache-2.0
dataset_info:
- config_name: wikiomnia_ruT5_raw
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: dev
num_bytes: 600356136
num_examples: 266295
- name: test
num_bytes: 572651444
num_examples: 267751
download_size: 1204094848
dataset_size: 1173007580
- config_name: wikiomnia_ruT5_filtered
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: train
num_bytes: 4157093224
num_examples: 2088027
download_size: 4278635364
dataset_size: 4157093224
- config_name: wikiomnia_ruGPT3_filtered
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: train
num_bytes: 338607635
num_examples: 173314
download_size: 348694031
dataset_size: 338607635
- config_name: wikiomnia_ruGPT3_raw
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: train_batch1
num_bytes: 553204785
num_examples: 260808
- name: train_batch2
num_bytes: 542823205
num_examples: 263599
- name: train_batch3
num_bytes: 582321994
num_examples: 269736
- name: train_batch4
num_bytes: 543315355
num_examples: 265948
- name: train_batch5
num_bytes: 513288049
num_examples: 268466
- name: train_batch6
num_bytes: 943556173
num_examples: 512147
- name: train_batch7
num_bytes: 929464509
num_examples: 508149
- name: train_batch8
num_bytes: 915128725
num_examples: 507559
- name: train_batch9
num_bytes: 926443048
num_examples: 504292
- name: train_batch10
num_bytes: 834958539
num_examples: 463812
- name: train_batch11
num_bytes: 509866027
num_examples: 287770
- name: train_batch12
num_bytes: 478843738
num_examples: 271410
- name: train_batch13
num_bytes: 757068702
num_examples: 385730
- name: train_batch14
num_bytes: 575937629
num_examples: 304110
- name: train_batch15
num_bytes: 517092031
num_examples: 277507
- name: train_batch16
num_bytes: 759363156
num_examples: 402203
- name: train_batch17
num_bytes: 860544388
num_examples: 466572
- name: train_batch18
num_bytes: 935985528
num_examples: 518348
- name: train_batch19
num_bytes: 936782197
num_examples: 514307
- name: train_batch20
num_bytes: 874299949
num_examples: 487238
download_size: 14939875008
dataset_size: 14490287727
- config_name: wikiomnia_ruT5_raw_train
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: train_batch3
num_bytes: 612693602
num_examples: 271391
- name: train_batch4
num_bytes: 570286147
num_examples: 265947
- name: train_batch5
num_bytes: 552502041
num_examples: 274650
- name: train_batch6
num_bytes: 1017066184
num_examples: 525224
- name: train_batch7
num_bytes: 972351430
num_examples: 509615
- name: train_batch8
num_bytes: 973314180
num_examples: 516828
- name: train_batch9
num_bytes: 981651841
num_examples: 512709
- name: train_batch10
num_bytes: 880664685
num_examples: 469512
- name: train_batch11
num_bytes: 543971388
num_examples: 294631
- name: train_batch12
num_bytes: 503939060
num_examples: 273526
- name: train_batch13
num_bytes: 794421530
num_examples: 392021
- name: train_batch14
num_bytes: 610815879
num_examples: 311452
- name: train_batch15
num_bytes: 540225492
num_examples: 278677
- name: train_batch16
num_bytes: 804003566
num_examples: 411192
- name: train_batch17
num_bytes: 903347135
num_examples: 469871
- name: train_batch18
num_bytes: 995239085
num_examples: 528301
- name: train_batch19
num_bytes: 1003402360
num_examples: 522264
- name: train_batch20
num_bytes: 948137237
num_examples: 499866
download_size: 14634332336
dataset_size: 14208032842
task_categories:
- question-answering
language:
- ru
tags:
- wikipedia
- wikiomnia
- squad
- QA
pretty_name: WikiOmnia
size_categories:
- 1M<n<10M
---
# Dataset Card for "WikiOmnia"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/RussianNLP](https://github.com/RussianNLP)
- **Paper:** [WikiOmnia: filtration and evaluation of the generated QA corpus on the whole Russian Wikipedia](https://arxiv.org/abs/2204.08009)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
We present the WikiOmnia dataset, a new publicly available set of QA-pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generative pipeline. The dataset includes every available article from Wikipedia for the Russian language. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).
WikiOmnia consists of 2 parts:
1. the voluminous, automatically generated part: 15.9 million triplets consisting of the original article summary, a corresponding generated question and a generated answer;
2. the filtered part: a subsample of 3.5 million triplets, fully verified with automatic means.
WikiOmnia adheres to the standard SQuAD format, resulting in triplets "text paragraph - question based on the paragraph - answer from the paragraph"; see the following example:
**Original Wikipedia paragraph**: Коити Масимо (яп. Масимо Ко:ити) — известный режиссёр аниме и основатель японской анимационной студии Bee Train. С
момента основания студии он руководит производством почти всех её картин, а также время от времени принимает участие в работе над анимацией и музыкой.
**English translation**: Koichi Mashimo is a famous anime director and the founder of the Japanese animation studio Bee Train. Since the creation of the studio, he directed almost all studio’s works, and he
also sometimes participates in art and sound tasks.
**Generated question (ruT5)**: Кто является основателем японской анимационной студии Bee Train?
**Generated answer (ruT5)**: Коити Масимо
**English QA translation**: Who is the founder of the Japanese animation studio Bee Train? Koichi Mashimo
## Dataset Creation
Models used for dataset generation:
- [ruT5](https://huggingface.co/sberbank-ai/ruT5-large) large fine-tuned on SberQuaD
- [ruGPT-3](https://huggingface.co/sberbank-ai/rugpt3xl) XL fine-tuned on SberQuaD
- [ruBERT](http://docs.deeppavlov.ai/en/master/features/models/squad.html) DeepPavlov tuned for QA tasks
Source: Wikipedia version March 2021
Special tokens: `<[TEXT]>`, `<[QUESTION]>`, `<[ANSWER]>`
The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).
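The filtered part was produced with strict automatic verification; the card lists a DeepPavlov ruBERT model tuned for QA among the tools used. As a rough, simplified stand-in for such a check (not the actual pipeline), one could keep only pairs whose generated answer occurs verbatim in the source summary, which holds for the Bee Train example above:

```python
def keep_pair(summary: str, question: str, answer: str) -> bool:
    """Toy filter: accept a generated QA pair only if the answer
    string occurs verbatim in the source summary."""
    return bool(answer) and answer in summary

paragraph = ("Коити Масимо (яп. Масимо Ко:ити) — известный режиссёр аниме "
             "и основатель японской анимационной студии Bee Train.")
print(keep_pair(paragraph, "Кто является основателем студии Bee Train?", "Коити Масимо"))  # True
print(keep_pair(paragraph, "Кто является основателем студии Bee Train?", "Токио"))  # False
```

The real verification described in the paper is model-based and stricter; this sketch only illustrates the shape of a pair-level filter.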

## Additional Information
### Licensing Information
[Apache 2.0 license](https://github.com/RussianNLP/WikiOmnia/blob/main/LICENSE)
### Citation Information
```
@inproceedings{pisarevskaya-shavrina-2022-wikiomnia,
title = "{W}iki{O}mnia: filtration and evaluation of the generated {QA} corpus on the whole {R}ussian {W}ikipedia",
author = "Pisarevskaya, Dina and
Shavrina, Tatiana",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.gem-1.10",
pages = "125--135",
abstract = "The General QA field has been developing the methodology referencing the Stanford Question answering dataset (SQuAD) as the significant benchmark. Compiling factual questions datasets requires manual annotations, limiting the training data{'}s potential size. We present the WikiOmnia dataset, a new publicly available set of QA pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generation and filtration pipeline. To ensure high quality of generated QA pairs, diverse manual and automated evaluation techniques were applied. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).",
}
```
### Contributions
Thanks to [@Deenochka](https://github.com/deenochka), [@TatianaShavrina](https://github.com/TatianaShavrina) | [
-0.5154334306716919,
-0.6824716329574585,
0.3623605966567993,
0.17442668974399567,
-0.26767703890800476,
0.009249325841665268,
-0.3806893229484558,
-0.287812203168869,
0.37943360209465027,
0.2365928292274475,
-0.9038906693458557,
-0.544525146484375,
-0.303223192691803,
0.3913659453392029,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qag_koquad | lmqg | 2022-12-18T08:03:53Z | 37 | 2 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_koquad",
"language:ko",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-18T08:03:53Z | 2022-12-18T07:05:17.000Z | 2022-12-18T07:05:17 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: ko
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_koquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_koquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on KorQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
Korean (ko)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""3.13 만세운동" 은 1919년 3.13일 전주에서 일어난 만세운동이다. 지역 인사들과 함께 신흥학교 학생들이 주도적인 역할을 하며, 만세운동을 이끌었다. 박태련, 김신극 등 전주 지도자들은 군산에서 4일과 5일 독립만세 시위가 감행됐다는 소식에 듣고 준비하고 있었다. 천도교와 박태련 신간회 총무집에서 필요한 태극기를 인쇄하기로 했었다. 서울을 비롯한 다른 지방에서 시위가 계속되자 일본경찰은 신흥학교와 기전학교를 비롯한 전주시내 학교에 강제 방학조치를 취했다. 이에 최종삼 등 신흥학교 학생 5명은 밤을 이용해 신흥학교 지하실에서 태극기 등 인쇄물을 만들었다. 준비를 마친 이들은 13일 장터로 모이기 시작했고, 채소가마니로 위장한 태극기를 장터로 실어 나르고 거사 직전 시장 입구인 완산동과 전주교 건너편에서 군중들에게 은밀히 배부했다. 낮 12시20분께 신흥학교와 기전학교 학생 및 천도교도 등은 태극기를 들고 만세를 불렀다. 남문 밖 시장, 제2보통학교(현 완산초등학교)에서 모여 인쇄물을 뿌리며 시가지로 구보로 행진했다. 시위는 오후 11시까지 서너차례 계속됐다. 또 다음날 오후 3시에도 군중이 모여 만세를 불렀다. 이후 고형진, 남궁현, 김병학, 김점쇠, 이기곤, 김경신 등 신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 1년을 언도 받았다. 이외 신흥학교 학생 3명은 일제의 고문에 옥사한 것으로 알려졌다. 또 시위를 지도한 김인전 목사는 이후 중국 상해로 거처를 옮겨 임시정부에서 활동했다. 현재 신흥학교 교문 옆에 만세운동 기념비가 세워져 있다.",
"questions": [ "만세운동 기념비가 세워져 있는 곳은?", "일본경찰의 강제 방학조치에도 불구하고 학생들은 신흥학교 지하실에 모여서 어떤 인쇄물을 만들었는가?", "여러 지방에서 시위가 일어나자 일본경찰이 전주시내 학교에 감행한 조치는 무엇인가?", "지역인사들과 신흥고등학교 학생들이 주도적인 역할을 한 3.13 만세운동이 일어난 해는?", "신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 몇년을 언도 받았는가?", "만세운동에서 주도적인 역할을 한 이들은?", "1919년 3.1 운동이 일어난 지역은 어디인가?", "3.13 만세운동이 일어난 곳은?" ],
"answers": [ "신흥학교 교문 옆", "태극기", "강제 방학조치", "1919년", "1년", "신흥학교 학생들", "전주", "전주" ],
"questions_answers": "question: 만세운동 기념비가 세워져 있는 곳은?, answer: 신흥학교 교문 옆 | question: 일본경찰의 강제 방학조치에도 불구하고 학생들은 신흥학교 지하실에 모여서 어떤 인쇄물을 만들었는가?, answer: 태극기 | question: 여러 지방에서 시위가 일어나자 일본경찰이 전주시내 학교에 감행한 조치는 무엇인가?, answer: 강제 방학조치 | question: 지역인사들과 신흥고등학교 학생들이 주도적인 역할을 한 3.13 만세운동이 일어난 해는?, answer: 1919년 | question: 신흥학교 학생들은 시위를 주도했다는 혐의로 모두 실형 몇년을 언도 받았는가?, answer: 1년 | question: 만세운동에서 주도적인 역할을 한 이들은?, answer: 신흥학교 학생들 | question: 1919년 3.1 운동이 일어난 지역은 어디인가?, answer: 전주 | question: 3.13 만세운동이 일어난 곳은?, answer: 전주"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
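Because the flat `questions_answers` string uses fixed separators (`question: `, `, answer: `, ` | `), it can be split back into pairs. A minimal sketch, assuming those separators never occur inside a question or an answer:

```python
def parse_questions_answers(flat: str):
    """Recover (question, answer) pairs from the flat `questions_answers` string."""
    pairs = []
    for chunk in flat.split(" | "):
        question, _, answer = chunk.partition(", answer: ")
        pairs.append((question.removeprefix("question: "), answer))
    return pairs

flat = ("question: 3.13 만세운동이 일어난 곳은?, answer: 전주 | "
        "question: 만세운동 기념비가 세워져 있는 곳은?, answer: 신흥학교 교문 옆")
print(parse_questions_answers(flat))
# [('3.13 만세운동이 일어난 곳은?', '전주'), ('만세운동 기념비가 세워져 있는 곳은?', '신흥학교 교문 옆')]
```

For training you would normally use the `questions`/`answers` list features directly; the parser is only useful when starting from the flattened string.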
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9600 | 960 | 4442|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.8342822790145874,
-0.9040916562080383,
0.5008871555328369,
0.16809536516666412,
-0.4211672842502594,
0.038282644003629684,
0.2703695297241211,
-0.16903197765350342,
0.4820159077644348,
0.3535682260990143,
-0.6032642126083374,
-0.4538024961948395,
-0.49750179052352905,
0.1820351928472519... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
emozilla/soda_synthetic_dialogue | emozilla | 2023-02-07T03:54:33Z | 37 | 7 | null | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|allenai/soda",
"language:en",
"license:mit",
"open-assistant",
"conv... | 2023-02-07T03:54:33Z | 2023-02-07T01:55:49.000Z | 2023-02-07T01:55:49 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- machine-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: "SODA Synthetic Dialogue"
size_categories:
- 1M<n<10M
source_datasets:
- extended|allenai/soda
tags:
- open-assistant
- conversations-summarization
task_categories:
- conversational
task_ids:
- dialogue-generation
dataset_info:
features:
- name: conversation
dtype: string
config_name: soda_synthetic_dialogue_dialogue_modeling
splits:
- name: train
num_bytes: 1524941476
num_examples: 1191582
- name: test
num_bytes: 190565707
num_examples: 148968
- name: validation
num_bytes: 187161878
num_examples: 146346
download_size: 1952926579
dataset_size: 1902669061
---
# Dataset Card for 🥤SODA Synthetic Dialogue
## Dataset Description
- **Repository:**
[Code](https://github.com/LAION-AI/Open-Assistant/openassistant/datasets/soda_synthetic_dialogue/)
## Dataset Summary
🥤SODA Synthetic Dialogue is a set of synthetic dialogues between Assistant and
User. In each conversation, User asks Assistant to perform summarization or
story generation tasks based on a snippet of an existing dialogue, story, or
from a title or theme.
This data was created by synthesizing the dialogues in
[🥤Soda](https://huggingface.co/datasets/allenai/soda) and applying a set of
templates to generate the conversation. The original research paper can be found
[here](https://arxiv.org/pdf/2212.10465v1.pdf).
Example:
```
User: Can you write the next few lines of dialogue for this scene:
Cornell: I know what you're thinking, Dontrell. I can sense your feelings and know exactly what you're going to say or do next. You're thinking about how much you miss your family, and how you wish you could see them again. You're wondering if there's anything you can do to make that happen. And you're also feeling a little guilty, because you know that if you could see them again, it would only be for a short while before they died.
Dontrell: Cornell, how can you possibly know all that? It's like you can read my mind!
Assistant: Sure, the next dialogue for this scene could be:
Cornell: I told you, I can sense your feelings. It's like I have a connection to your thoughts and emotions. I'm not sure why or how it works, but ever since we met, I've been able to understand what you're thinking and feeling pretty easily.
Dontrell: Wow...that's amazing! And a little bit scary too. But it does explain why we've always been so close - even though we've only known each other for a few months.
Cornell: Yeah, I guess our connection is pretty strong. But don't worry, Dontrell - I'll never use it to manipulate or control you in any way. I promise."
Dontrell: That's a relief! I was starting to feel a little bit like you could read my mind, and it was kind of creepy. But if you promise not to use your abilities to control me, then I'm okay with it.
Cornell: I swear, Dontrell. I'll never use my abilities to control or manipulate you in any way. You have my word.
User: And a title?
Assistant: Sure, a title for this dialogue could be:
Cornell is intuitive. Cornell reads Dontrell's mind.
User: How about a short description?
Assistant: Sure, a short description for this dialogue could be:
Cornell knows what Dontrell is thinking. He can sense her feelings and know exactly what she is going to say or do next.
User: What would be one theme of this story?
Assistant: One theme of this story could be:
being intuitive
```
## Usage
The dataset contains one configuration, `dialogue_modeling`, which has a single
text `conversation` feature.
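Since each example is a single `conversation` string with `User:`/`Assistant:` speaker prefixes (as in the example above), downstream code typically needs to split it into turns. A minimal sketch, assuming every turn starts at the beginning of a line with one of those two prefixes and that later unprefixed lines continue the previous turn:

```python
def split_turns(conversation: str):
    """Split a `conversation` string into (speaker, text) turns."""
    turns = []
    for line in conversation.splitlines():
        if line.startswith(("User:", "Assistant:")):
            speaker, _, text = line.partition(":")
            turns.append([speaker, text.strip()])
        elif turns:
            turns[-1][1] += "\n" + line  # continuation of the previous turn
    return [tuple(t) for t in turns]

demo = ("User: And a title?\n"
        "Assistant: Sure, a title for this dialogue could be:\n"
        "Cornell is intuitive.")
print(split_turns(demo))
# [('User', 'And a title?'), ('Assistant', 'Sure, a title for this dialogue could be:\nCornell is intuitive.')]
```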
## Source data
The script to synthesize this dataset was originally created by
[ontocord](https://github.com/ontocord) in
[this Colab notebook](https://colab.research.google.com/drive/1Sw3px5dP8whdqT7QMNoqwmqIasZkMbJi?usp=sharing)
and prepared for Hugging Face by
[Jeffrey Quesnelle](https://github.com/jquesnelle/).
## Citation
Please cite our work if you find the resources in this repository useful:
```
@article{ontocord2023sodasynth,
author = {ontocord and Jeffrey Quesnelle},
title = {SODA Synthetic Dialogue},
year = {2023}
}
``` | [
-0.4812048673629761,
-0.8667908906936646,
0.6718685030937195,
0.0023351204581558704,
-0.1576988846063614,
0.2074228972196579,
-0.21044494211673737,
-0.21978452801704407,
0.5087113380432129,
0.33953243494033813,
-0.9367687702178955,
-0.4203988015651703,
-0.1435336321592331,
0.34057220816612... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/AID_MultiLabel | jonathan-roberts1 | 2023-04-03T16:38:58Z | 37 | 0 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:cc0-1.0",
"region:us"
] | 2023-04-03T16:38:58Z | 2023-02-28T15:22:36.000Z | 2023-02-28T15:22:36 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
sequence:
class_label:
names:
'0': airplane
'1': bare soil
'2': buildings
'3': cars
'4': chaparral
'5': court
'6': dock
'7': field
'8': grass
'9': mobile home
'10': pavement
'11': sand
'12': sea
'13': ship
'14': tanks
'15': trees
'16': water
splits:
- name: train
num_bytes: 278244208
num_examples: 3000
download_size: 278126146
dataset_size: 278244208
license: cc0-1.0
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "AID_MultiLabel"
## Dataset Description
- **Paper:** [AID: A benchmark data set for performance evaluation of aerial scene classification](https://ieeexplore.ieee.org/iel7/36/4358825/07907303.pdf)
- **Paper:** [Relation Network for Multi-label Aerial Image Classification](https://ieeexplore.ieee.org/iel7/36/4358825/08986556.pdf)
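Per the metadata above, the `label` feature is a sequence of class indices over 17 classes (`airplane` through `water`). For most multi-label losses it needs to be expanded into a fixed-size multi-hot vector; a minimal sketch:

```python
NUM_CLASSES = 17  # class indices 0 ("airplane") .. 16 ("water"), as listed in the metadata

def multi_hot(label_ids):
    """Expand a sequence of class indices into a fixed-size multi-hot vector."""
    vec = [0] * NUM_CLASSES
    for i in label_ids:
        vec[i] = 1
    return vec

print(multi_hot([0, 2, 15]))  # 1s at positions 0 (airplane), 2 (buildings) and 15 (trees)
```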
### Licensing Information
CC0: Public Domain
## Citation Information
Imagery:
[AID: A benchmark data set for performance evaluation of aerial scene classification](https://ieeexplore.ieee.org/iel7/36/4358825/07907303.pdf)
Multilabels:
[Relation Network for Multi-label Aerial Image Classification](https://ieeexplore.ieee.org/iel7/36/4358825/08986556.pdf)
```
@article{xia2017aid,
title = {AID: A benchmark data set for performance evaluation of aerial scene classification},
author = {Xia, Gui-Song and Hu, Jingwen and Hu, Fan and Shi, Baoguang and Bai, Xiang and Zhong, Yanfei and Zhang, Liangpei and Lu, Xiaoqiang},
year = 2017,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 55,
number = 7,
pages = {3965--3981}
}
@article{hua2019relation,
title = {Relation Network for Multi-label Aerial Image Classification},
author = {Hua, Yuansheng and Mou, Lichao and Zhu, Xiao Xiang},
    year = 2020,
    journal = {IEEE Transactions on Geoscience and Remote Sensing},
    doi = {10.1109/TGRS.2019.2963364}
}
``` | [
-0.7127482295036316,
0.05645154416561127,
-0.18689891695976257,
0.2009725123643875,
-0.1591198593378067,
-0.20570413768291473,
0.06745745986700058,
-0.4749458134174347,
-0.16957077383995056,
0.15851201117038727,
-0.41876447200775146,
-0.4671592712402344,
-0.6062270998954773,
0.127763256430... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/acute_inflammation | mstz | 2023-04-15T11:37:39Z | 37 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:100<n<1K",
"language:en",
"acute_inflammation",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-15T11:37:39Z | 2023-04-05T11:13:27.000Z | 2023-04-05T11:13:27 | ---
language:
- en
tags:
- acute_inflammation
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Acute Inflammation
size_categories:
- 100<n<1K
task_categories:
- tabular-classification
configs:
- inflammation
- nephritis
- bladder
---
# Acute Inflammation
The [Acute Inflammation dataset](https://archive.ics.uci.edu/ml/datasets/Acute+Inflammations) from the [UCI ML repository](https://archive-beta.ics.uci.edu).
Predict whether the patient has an acute inflammation.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| inflammation | Binary classification | Does the patient have an acute inflammation? |
| nephritis         | Binary classification     | Does the patient have nephritis of the renal pelvis?          |
| bladder | Binary classification | Does the patient have bladder inflammation? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/acute_inflammation", "inflammation")["train"]
```
# Features
The target feature changes according to the selected configuration and is always the last column of the dataset.
| **Feature** | **Type** |
|---------------------------------------|---------------|
| `temperature` | `[float64]` |
| `has_nausea` | `[bool]` |
| `has_lumbar_pain` | `[bool]` |
| `has_urine_pushing` | `[bool]` |
| `has_micturition_pains` | `[bool]` |
| `has_burnt_urethra` | `[bool]` |
| `has_inflammed_bladder` | `[bool]` |
| `has_nephritis_of_renal_pelvis` | `[bool]` |
| `has_acute_inflammation` | `[int8]` | | [
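Because the target is always the last column of whichever configuration is selected, a generic feature/target split can rely on column order alone. A minimal sketch (the concrete column list below is illustrative; with the `datasets` library you would pass `dataset.column_names` instead):

```python
def split_columns(column_names):
    """The target is always the last column of the selected configuration."""
    return list(column_names[:-1]), column_names[-1]

columns = ["temperature", "has_nausea", "has_lumbar_pain", "has_urine_pushing",
           "has_micturition_pains", "has_burnt_urethra", "has_acute_inflammation"]
features, target = split_columns(columns)
print(target)  # has_acute_inflammation
```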
-0.2256852388381958,
-0.3603340685367584,
0.6835135817527771,
0.28592151403427124,
-0.3343653976917267,
-0.13294696807861328,
0.24117609858512878,
-0.2888602614402771,
0.5291900634765625,
0.49562808871269226,
-0.25948694348335266,
-0.8352102041244507,
-0.9880150556564331,
0.582648515701294... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
c-s-ale/dolly-15k-instruction-alpaca-format | c-s-ale | 2023-04-13T06:08:38Z | 37 | 20 | null | [
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-3.0",
"instruction",
"region:us"
] | 2023-04-13T06:08:38Z | 2023-04-13T06:01:38.000Z | 2023-04-13T06:01:38 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: category
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 12271354
num_examples: 15015
download_size: 7801648
dataset_size: 12271354
license: cc-by-3.0
language:
- en
tags:
- instruction
pretty_name: Databricks Dolly 15k (Alpaca format, citations removed)
size_categories:
- 10K<n<100K
---
# Dataset Description
- **Blog:** https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm
- **Repo:** https://github.com/databrickslabs/dolly
# Databricks Dolly 15k Dataset with citations removed and in Alpaca Format
**NOTE**
This is a reupload of the Databricks dataset found [here](https://github.com/databrickslabs/dolly/tree/master/data), but modified to be in Alpaca format, and with the citation numbers removed.
This work is not my own, and all credit goes to Databricks.
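In Alpaca format each record carries `instruction`, optional `input`, and `output` fields (plus `category` here). The usual way to render them into a training prompt is with the two standard Alpaca templates. A minimal sketch; the template wording below is the one popularized by the Stanford Alpaca repo and is an assumption, not something this dataset prescribes:

```python
# Standard Alpaca templates (assumed wording, not defined by this dataset).
ALPACA_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)
ALPACA_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
)

def to_prompt(record: dict) -> str:
    """Render one dataset record into an Alpaca-style prompt (response excluded)."""
    template = ALPACA_WITH_INPUT if record.get("input") else ALPACA_NO_INPUT
    return template.format(**record)

example = {"instruction": "Name three primary colors.", "input": "",
           "output": "Red, yellow, blue.", "category": "open_qa"}
print(to_prompt(example).splitlines()[0])  # first line of the rendered prompt
```

During fine-tuning, `output` would be appended after the `### Response:` marker as the supervision target.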
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context` field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
# Intended Uses
While immediately valuable for instruction fine-tuning of large language models, as a corpus of human-generated instruction prompts this dataset also presents a valuable opportunity for synthetic data generation using the methods outlined in the Self-Instruct paper. For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization) contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. | [
-0.4406491816043854,
-1.0700726509094238,
0.20282414555549622,
0.18986083567142487,
-0.13628897070884705,
-0.1297755390405655,
-0.293658971786499,
-0.1398894041776657,
0.04377928748726845,
0.5069402456283569,
-0.6437726616859436,
-0.6217847466468811,
-0.30533647537231445,
0.317178279161453... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Blgn94/mongolian-ner | Blgn94 | 2023-05-01T08:29:43Z | 37 | 0 | null | [
"region:us"
] | 2023-05-01T08:29:43Z | 2023-05-01T08:29:42.000Z | 2023-05-01T08:29:42 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
splits:
- name: train
num_bytes: 4006355
num_examples: 10162
download_size: 1026335
dataset_size: 4006355
---
# Dataset Card for "mongolian-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5558216571807861,
-0.24331164360046387,
-0.12358680367469788,
0.14543955028057098,
-0.1684168577194214,
-0.03428998589515686,
0.19015896320343018,
-0.26120778918266296,
0.8495486974716187,
0.4810601472854614,
-0.7888010740280151,
-0.9971447587013245,
-0.548938512802124,
-0.1164753362536... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hhu-dsml/emowoz | hhu-dsml | 2023-06-01T13:23:58Z | 37 | 1 | emowoz-1 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-analysis",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:MultiWOZ",
"source_datasets:Original (human-... | 2023-06-01T13:23:58Z | 2023-05-31T11:55:27.000Z | 2023-05-31T11:55:27 | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
multilinguality:
- monolingual
source_datasets:
- MultiWOZ
- Original (human-machine interaction dialogues)
pretty_name: EmoWOZ
task_ids:
- sentiment-classification
- sentiment-analysis
paperswithcode_id: emowoz-1
configs:
- emowoz
- multiwoz
- dialmage
dataset_info:
- config_name: emowoz
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
- config_name: multiwoz
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
- config_name: dialmage
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
---
# Dataset Card for EmoWOZ Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [EmoWOZ Dataset repository](https://zenodo.org/record/6506504), [EmoWOZ Benchmark repository](https://gitlab.cs.uni-duesseldorf.de/general/dsml/emowoz-public)
- **Paper:** [EmoWOZ: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems](https://aclanthology.org/2022.lrec-1.436/)
- **Leaderboard:** [Papers with Code leaderboard for EmoWOZ Dataset](https://paperswithcode.com/dataset/emowoz-1)
- **Point of Contact:** [Shutong Feng](mailto:shutong.feng@hhu.de)
### Dataset Summary
EmoWOZ is based on [MultiWOZ, a multi-domain task-oriented dialogue dataset](https://github.com/budzianowski/multiwoz). It contains more than 11K task-oriented dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues (DialMAGE) within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. There are 7 emotion labels, adapted from the OCC emotion model: _Neutral_, _Satisfied_, _Dissatisfied_, _Excited_, _Apologetic_, _Fearful_, _Abusive_.
Some of the statistics about the dataset:
| Metric | Value |
| ---------- | ---------------- |
| # Dialogues | 11434 |
| # Turns | 167234 |
| # Annotations | 83617 |
| # Unique Tokens | 28417 |
| Average Turns per Dialogue | 14.63 |
| Average Tokens per Turn | 12.78 |
Emotion Distribution in EmoWOZ and subsets:
| Emotion | EmoWOZ | MultiWOZ | DialMAGE |
| ---------- | ---------------- | ---------- | ---------------- |
| Neutral | 58,656 | 51,426 | 7,230 |
| Satisfied | 17,532 | 17,061 | 471 |
| Dissatisfied | 5,117 | 914 | 4,203 |
| Excited | 971 | 860 | 111 |
| Apologetic | 840 | 838 | 2 |
| Fearful | 396 | 381 | 15 |
| Abusive | 105 | 44 | 61 |
### Supported Tasks and Leaderboards
- 'Emotion Recognition in Conversations': See the [Papers With Code leaderboard](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-emowoz) for more models.
- 'Additional Classification Tasks': According to the initial benchmark [paper](https://aclanthology.org/2022.lrec-1.436/), emotion labels in EmoWOZ can be mapped to sentiment polarities. Therefore, sentiment classification and sentiment analysis can also be performed. Since EmoWOZ has two subsets: MultiWOZ (human-to-human) and DialMAGE (human-to-machine), it is also possible to perform cross-domain emotion/sentiment recognition.
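The emotion-to-sentiment mapping mentioned above can be sketched as follows. The grouping below is one plausible reading of the label valences described in this card (the exact mapping used in the benchmark paper may differ), and the helper name `to_sentiment` is illustrative, not part of any dataset API:

```python
# One plausible mapping from EmoWOZ emotion ids to sentiment
# polarities, based on the valence of each label described in
# this card; the benchmark paper's exact grouping may differ.
SENTIMENT = {
    0: "neutral",    # neutral
    1: "negative",   # fearful
    2: "negative",   # dissatisfied
    3: "negative",   # apologetic
    4: "negative",   # abusive
    5: "positive",   # excited
    6: "positive",   # satisfied
}

def to_sentiment(emotions):
    """Map a dialogue's emotion ids to polarities, skipping the
    -1 placeholder used for unlabelled system turns."""
    return [SENTIMENT[e] for e in emotions if e >= 0]
```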
### Languages
Only English is represented in the data.
## Dataset Structure
### Data Instances
For each instance, there is a string id for the dialogue, a list of strings for the dialogue utterances, and a list of integers for the emotion labels.
```
{
'dialogue_id': 'PMUL4725.json',
'log': {
'text': [
'Hi, i am looking for some museums that I could visit when in town, could you help me find some?',
'Is there an area of town you prefer?',
"No, I don't care.",
"I recommend the Cafe Jello Gallery in the west. It's free to enter!",
'I also need a place to stay',
'Great! There are 33 hotels in the area. What area of town would you like to stay in? What is your preference on price?',
" The attraction should be in the type of museum. I don't care about the price range or the area",
'Just to clarify - did you need a different museum? Or a hotel?',
'That museum from earlier is fine, I just need their postalcode. I need a hotel two in the west and moderately priced. ',
"The postal code for Cafe Jello Gallery is cb30af. Okay, Hobson's House matches your request. ",
'Do they have internet?',
'Yes they do. Would you like me to book a room for you?',
"No thanks. I will do that later. Can you please arrange for taxi service from Cafe Jello to Hobson's House sometime after 04:00?",
'I was able to book that for you. Be expecting a grey Tesla. If you need to reach them, please call 07615015749. ',
'Well that you that is all i need for today',
'Your welcome. Have a great day!'
],
'emotion': [0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1]
}
}
```
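Given an instance shaped like the example above, user turns can be paired with their labels by filtering out the `-1` placeholders on system turns. This is a minimal sketch; `user_turns_with_labels` is an illustrative name, not part of the dataset's API:

```python
def user_turns_with_labels(instance):
    """Pair each annotated user turn with its emotion id,
    dropping system turns (emotion == -1)."""
    log = instance["log"]
    return [
        (text, emo)
        for text, emo in zip(log["text"], log["emotion"])
        if emo != -1
    ]

# Tiny hypothetical instance in the same shape as the example above.
example = {"log": {"text": ["hi", "hello", "thanks"],
                   "emotion": [0, -1, 6]}}
print(user_turns_with_labels(example))  # [('hi', 0), ('thanks', 6)]
```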
### Data Fields
- `dialogue_id`: a string representing the unique id of the dialogue. For MultiWOZ dialogues, the original id is kept. For DialMAGE dialogues, all ids are in the format DMAGExxx.json, where xxx is an integer with a variable number of digits.
- `text`: a list of strings containing the dialogue turns.
- `emotion`: a list of integers containing the sequence of emotion labels for the dialogue. Specifically,
- -1: system turns with unlabelled emotion
- 0: neutral, no emotion expressed
- 1: fearful, or sad/disappointed, negative emotion elicited by facts/events, which is out of the system's control
- 2: dissatisfied, negative emotion elicited by the system, usually after the system's poor performance
- 3: apologetic, negative emotion from the user, usually expressing apologies for causing confusion or changing search criteria
- 4: abusive, negative emotion elicited by the system, expressed in an impolite way
- 5: excited, positive emotion elicited by facts/events
- 6: satisfied, positive emotion elicited by the system
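The integer labels listed above can be decoded with a simple lookup. This sketch mirrors the label descriptions in this card; `decode_emotions` is an illustrative helper name, not part of the dataset's API:

```python
# Lookup table mirroring the label list above; -1 marks
# unlabelled system turns.
EMOTION_LABELS = {
    -1: "unlabelled (system turn)",
    0: "neutral",
    1: "fearful",
    2: "dissatisfied",
    3: "apologetic",
    4: "abusive",
    5: "excited",
    6: "satisfied",
}

def decode_emotions(label_ids):
    """Translate a dialogue's integer labels into label names."""
    return [EMOTION_LABELS[i] for i in label_ids]
```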
### Data Splits
The EmoWOZ dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for the dataset.
| Dataset Split | Number of Emotion Annotations in Split| Of Which from MultiWOZ | Of Which from DialMage |
| ------------- | ----------------------------| ------------- | ------------------------------------------- |
| Train | 66,474 | 56,778 | 9,696 |
| Validation | 8,509 | 7,374 | 1,135 |
| Test | 8,634 | 7,372 | 1,262 |
## Dataset Creation
### Curation Rationale
EmoWOZ was built on top of MultiWOZ because MultiWOZ is a well-established dataset for task-oriented dialogue modelling, allowing further study of the impact of user emotions on downstream tasks. The additional 1000 human-machine dialogues (DialMAGE) were collected to improve emotion coverage and the diversity of emotional expression.
### Source Data
#### Initial Data Collection and Normalization
MultiWOZ dialogues were inherited from the work of [MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling](https://aclanthology.org/D18-1547/).
DialMAGE dialogues were collected from a human evaluation of an RNN-based policy trained on MultiWOZ, conducted on the Amazon Mechanical Turk platform.
#### Who are the source language producers?
The text of both MultiWOZ and DialMAGE was written by workers on the Amazon Mechanical Turk platform. For detailed data collection set-ups, please refer to their respective publications.
### Annotations
All dialogues take place between a _user_ and a _system_ (or an _operator_). The dialogue always starts with a user turn, which is always followed by a system response, and ends with a system turn. Only user turns are annotated with an emotion label.
#### Annotation process
Each user utterance was annotated by three annotators. The final label was determined by majority voting. If there was no agreement, the final label was resolved manually.
For details such as annotator selection process and quality assurance methods, please refer to the EmoWOZ publication.
#### Who are the annotators?
Annotators are crowdsourced workers on the Amazon Mechanical Turk platform.
### Personal and Sensitive Information
All annotators are anonymised. There is no personal information in EmoWOZ.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop task-oriented dialogue systems that can perceive human emotions and avoid abusive behaviours. This task is useful for building more human-like dialogue agents.
### Discussion of Biases
The emotion distribution differs between the MultiWOZ (human-human) and DialMAGE (human-machine) subsets of EmoWOZ. The linguistic styles also differ between the two subsets.
As pointed out in [Reevaluating Data Partitioning for Emotion Detection in EmoWOZ](https://arxiv.org/abs/2303.13364), there is also emotion shift across the train-dev-test split in the MultiWOZ subset. EmoWOZ keeps the original data split of MultiWOZ, which is suitable for task-oriented dialogue modelling, but the emotion distributions in these splits differ. Further investigations will be needed.
### Other Known Limitations
The emotion distribution is unbalanced where _neutral_, _satisfied_, and _dissatisfied_ make up more than 95% of the labels.
## Additional Information
### Dataset Curators
The collection and annotation of EmoWOZ were conducted by the [Chair for Dialog Systems and Machine Learning at Heinrich Heine Universität Düsseldorf](https://www.cs.hhu.de/lehrstuehle-und-arbeitsgruppen/dialog-systems-and-machine-learning).
### Licensing Information
The EmoWOZ dataset is released under the [CC-BY-NC-4.0 License](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@inproceedings{feng-etal-2022-emowoz,
title = "{E}mo{WOZ}: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems",
author = "Feng, Shutong and
Lubis, Nurul and
Geishauser, Christian and
Lin, Hsien-chin and
Heck, Michael and
van Niekerk, Carel and
Gasic, Milica",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.436",
pages = "4096--4113",
abstract = "The ability to recognise emotions lends a conversational artificial intelligence a human touch. While emotions in chit-chat dialogues have received substantial attention, emotions in task-oriented dialogues remain largely unaddressed. This is despite emotions and dialogue success having equally important roles in a natural system. Existing emotion-annotated task-oriented corpora are limited in size, label richness, and public availability, creating a bottleneck for downstream tasks. To lay a foundation for studies on emotions in task-oriented dialogues, we introduce EmoWOZ, a large-scale manually emotion-annotated corpus of task-oriented dialogues. EmoWOZ is based on MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than 11K dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. To the best of our knowledge, this is the first large-scale open-source corpus of its kind. We propose a novel emotion labelling scheme, which is tailored to task-oriented dialogues. We report a set of experimental results to show the usability of this corpus for emotion recognition and state tracking in task-oriented dialogues.",
}
``` | [
-0.4977991580963135,
-0.9235718250274658,
0.1953601986169815,
0.12523016333580017,
-0.016370628029108047,
0.019742462784051895,
-0.1778041124343872,
-0.43027907609939575,
0.49609267711639404,
0.5826842784881592,
-1.020980715751648,
-0.9207795262336731,
-0.4071701169013977,
0.25544217228889... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChanceFocus/flare-sm-cikm | ChanceFocus | 2023-06-25T18:16:45Z | 37 | 1 | null | [
"region:us"
] | 2023-06-25T18:16:45Z | 2023-06-25T17:56:12.000Z | 2023-06-25T17:56:12 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 26082681
num_examples: 3396
- name: valid
num_bytes: 3231915
num_examples: 431
- name: test
num_bytes: 8123670
num_examples: 1143
download_size: 19175558
dataset_size: 37438266
---
# Dataset Card for "flare-sm-cikm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7796297669410706,
-0.09088815748691559,
0.17996852099895477,
0.19834072887897491,
-0.1970546692609787,
0.2641538083553314,
0.150661438703537,
-0.1451866775751114,
1.081404447555542,
0.4768994152545929,
-1.0268019437789917,
-0.6402080655097961,
-0.5285605788230896,
-0.2074672430753708,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
leostelon/california-housing | leostelon | 2023-07-14T05:31:59Z | 37 | 0 | null | [
"license:mit",
"region:us"
] | 2023-07-14T05:31:59Z | 2023-07-14T05:31:17.000Z | 2023-07-14T05:31:17 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adityarra07/sub_ATC | adityarra07 | 2023-08-06T05:38:09Z | 37 | 0 | null | [
"region:us"
] | 2023-08-06T05:38:09Z | 2023-08-04T19:13:17.000Z | 2023-08-04T19:13:17 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 136737944.06422067
num_examples: 1000
- name: test
num_bytes: 13673794.406422066
num_examples: 100
download_size: 12473551
dataset_size: 150411738.47064275
---
# Dataset Card for "sub_ATC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6783462762832642,
-0.19930842518806458,
0.22496427595615387,
-0.052309706807136536,
-0.4337063133716583,
0.29643169045448303,
0.4923563301563263,
-0.1350678950548172,
1.0024641752243042,
0.3173524737358093,
-0.9259867668151855,
-0.9172824621200562,
-0.5535721778869629,
-0.21277339756488... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf | open-llm-leaderboard | 2023-09-18T06:46:57Z | 37 | 0 | null | [
"region:us"
] | 2023-09-18T06:46:57Z | 2023-08-21T11:06:07.000Z | 2023-08-21T11:06:07 | ---
pretty_name: Evaluation run of meta-llama/Llama-2-70b-hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 124 configuration, each one coresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 10 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run (and is used to compute and display the agregated metrics on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-18T06:46:44.905361](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf/blob/main/results_2023-09-18T06-46-44.905361.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0017827181208053692,\n\
\ \"em_stderr\": 0.00043200973460388544,\n \"f1\": 0.06615562080536916,\n\
\ \"f1_stderr\": 0.0013739852117668813,\n \"acc\": 0.5885312292623206,\n\
\ \"acc_stderr\": 0.011707750309504293\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388544,\n\
\ \"f1\": 0.06615562080536916,\n \"f1_stderr\": 0.0013739852117668813\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.33965125094768767,\n \
\ \"acc_stderr\": 0.01304504506766526\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8374112075769534,\n \"acc_stderr\": 0.010370455551343326\n\
\ }\n}\n```"
repo_url: https://huggingface.co/meta-llama/Llama-2-70b-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|arc:challenge|25_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|arc:challenge|25_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|arc:challenge|25_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|arc:challenge|25_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_08T23_38_08.931556
path:
- '**/details_harness|drop|3_2023-09-08T23-38-08.931556.parquet'
- split: 2023_09_18T06_46_44.905361
path:
- '**/details_harness|drop|3_2023-09-18T06-46-44.905361.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-18T06-46-44.905361.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_08T23_38_08.931556
path:
- '**/details_harness|gsm8k|5_2023-09-08T23-38-08.931556.parquet'
- split: 2023_09_18T06_46_44.905361
path:
- '**/details_harness|gsm8k|5_2023-09-18T06-46-44.905361.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-18T06-46-44.905361.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hellaswag|10_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hellaswag|10_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hellaswag|10_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hellaswag|10_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_0
data_files:
- split: 2023_08_21T11_06_07.240233
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T11:06:07.240233.parquet'
- split: 2023_08_21T11_28_25.684618
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T11:28:25.684618.parquet'
- split: 2023_08_21T20_33_55.417483
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T20:33:55.417483.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T20:33:55.417483.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T09:05:23.035851.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T10:47:05.866748.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:42:09.433095.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:47:53.141854.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_0
data_files:
- split: 2023_08_21T11_06_07.240233
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T11:06:07.240233.parquet'
- split: 2023_08_21T11_28_25.684618
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T11:28:25.684618.parquet'
- split: 2023_08_21T20_33_55.417483
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T20:33:55.417483.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|0_2023-08-21T20:33:55.417483.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_22T09_05_23.035851
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T09:05:23.035851.parquet'
- split: 2023_08_22T10_47_05.866748
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T10:47:05.866748.parquet'
- split: 2023_08_22T13_42_09.433095
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T13:42:09.433095.parquet'
- split: 2023_08_22T13_47_53.141854
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T13:47:53.141854.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T13:47:53.141854.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_08T23_38_08.931556
path:
- '**/details_harness|winogrande|5_2023-09-08T23-38-08.931556.parquet'
- split: 2023_09_18T06_46_44.905361
path:
- '**/details_harness|winogrande|5_2023-09-18T06-46-44.905361.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-18T06-46-44.905361.parquet'
- config_name: original_mmlu_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:international_law|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:management|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:marketing|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:sociology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:virology|5_2023-08-28T20:36:26.123850.parquet'
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_abstract_algebra_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:abstract_algebra|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_anatomy_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:anatomy|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_astronomy_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:astronomy|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_business_ethics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:business_ethics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_clinical_knowledge_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_biology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_biology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_chemistry_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_computer_science_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_mathematics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_medicine_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_medicine|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_college_physics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:college_physics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_computer_security_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:computer_security|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_conceptual_physics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:conceptual_physics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_econometrics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:econometrics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_electrical_engineering_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:electrical_engineering|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_elementary_mathematics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_formal_logic_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:formal_logic|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_global_facts_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:global_facts|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_biology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_biology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_chemistry_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_computer_science_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_european_history_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_european_history|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_geography_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_geography|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_mathematics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_microeconomics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_physics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_physics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_psychology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_psychology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_statistics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_statistics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_us_history_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_us_history|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_high_school_world_history_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:high_school_world_history|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_human_aging_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_aging|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_human_sexuality_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:human_sexuality|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_international_law_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:international_law|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_jurisprudence_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:jurisprudence|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_logical_fallacies_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:logical_fallacies|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_machine_learning_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:machine_learning|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_management_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:management|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_marketing_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:marketing|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_medical_genetics_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:medical_genetics|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_miscellaneous_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:miscellaneous|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_moral_disputes_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_disputes|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_moral_scenarios_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:moral_scenarios|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_nutrition_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:nutrition|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_philosophy_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:philosophy|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_prehistory_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:prehistory|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_professional_accounting_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_accounting|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_professional_law_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_law|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_professional_medicine_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_medicine|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_professional_psychology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:professional_psychology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_public_relations_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:public_relations|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_security_studies_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:security_studies|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_sociology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:sociology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_us_foreign_policy_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_virology_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:virology|5_2023-08-28T20:36:26.123850.parquet'
- config_name: original_mmlu_world_religions_5
data_files:
- split: 2023_08_28T20_36_26.123850
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:36:26.123850.parquet'
- split: latest
path:
- '**/details_original|mmlu:world_religions|5_2023-08-28T20:36:26.123850.parquet'
- config_name: results
data_files:
- split: 2023_08_21T11_06_07.240233
path:
- results_2023-08-21T11:06:07.240233.parquet
- split: 2023_08_21T11_28_25.684618
path:
- results_2023-08-21T11:28:25.684618.parquet
- split: 2023_08_21T20_33_55.417483
path:
- results_2023-08-21T20:33:55.417483.parquet
- split: 2023_08_22T09_05_23.035851
path:
- results_2023-08-22T09:05:23.035851.parquet
- split: 2023_08_22T10_47_05.866748
path:
- results_2023-08-22T10:47:05.866748.parquet
- split: 2023_08_22T13_42_09.433095
path:
- results_2023-08-22T13:42:09.433095.parquet
- split: 2023_08_22T13_47_53.141854
path:
- results_2023-08-22T13:47:53.141854.parquet
- split: 2023_08_28T20_36_26.123850
path:
- results_2023-08-28T20:36:26.123850.parquet
- split: 2023_09_08T23_38_08.931556
path:
- results_2023-09-08T23-38-08.931556.parquet
- split: 2023_09_18T06_46_44.905361
path:
- results_2023-09-18T06-46-44.905361.parquet
- split: latest
path:
- results_2023-09-18T06-46-44.905361.parquet
---
# Dataset Card for Evaluation run of meta-llama/Llama-2-70b-hf
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/meta-llama/Llama-2-70b-hf
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 124 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 10 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf",
"harness_winogrande_5",
	split="latest")
```
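The per-run split names appear to be derived from the run timestamp by replacing `:` and `-` with `_` (compare the split `2023_09_18T06_46_44.905361` with the file `results_2023-09-18T06-46-44.905361.parquet` in the config list above). A minimal helper, assuming that naming convention holds (`run_timestamp_to_split` is an illustrative name, not part of any official API):

```python
def run_timestamp_to_split(ts: str) -> str:
    """Map an ISO-style run timestamp to the split name used in this dataset.

    Assumes the convention visible in the config list: ":" and "-" become "_",
    while "T" and "." are kept as-is.
    """
    return ts.replace("-", "_").replace(":", "_")


print(run_timestamp_to_split("2023-09-18T06:46:44.905361"))
# 2023_09_18T06_46_44.905361
```

This can be handy for loading the split of one specific run instead of `"latest"`.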
## Latest results
These are the [latest results from run 2023-09-18T06:46:44.905361](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-70b-hf/blob/main/results_2023-09-18T06-46-44.905361.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388544,
"f1": 0.06615562080536916,
"f1_stderr": 0.0013739852117668813,
"acc": 0.5885312292623206,
"acc_stderr": 0.011707750309504293
},
"harness|drop|3": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388544,
"f1": 0.06615562080536916,
"f1_stderr": 0.0013739852117668813
},
"harness|gsm8k|5": {
"acc": 0.33965125094768767,
"acc_stderr": 0.01304504506766526
},
"harness|winogrande|5": {
"acc": 0.8374112075769534,
"acc_stderr": 0.010370455551343326
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.4141259789466858,
-0.6446269750595093,
0.2684251368045807,
0.2981976568698883,
-0.3002927005290985,
0.24675481021404266,
-0.2954596281051636,
-0.27238085865974426,
0.524583637714386,
0.530259370803833,
-0.7527627348899841,
-0.9601004719734192,
-0.7416244745254517,
0.3151761293411255,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/indonlu_nergrit | SEACrowd | 2023-09-26T12:35:26Z | 37 | 0 | null | [
"language:ind",
"license:mit",
"named-entity-recognition",
"region:us"
] | 2023-09-26T12:35:26Z | 2023-09-26T11:31:21.000Z | 2023-09-26T11:31:21 | ---
license: mit
tags:
- named-entity-recognition
language:
- ind
---
# indonlu_nergrit
This NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation.
The dataset consists of three kinds of named entity tags: PERSON (name of person), PLACE (name of location), and
ORGANIZATION (name of organization).
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
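To illustrate what "spans in IOB chunking representation" means, here is a minimal sketch of grouping token-level IOB tags into entity spans. The example sentence and exact tag strings (`B-PERSON`, `I-PERSON`, `B-PLACE`) are illustrative assumptions, not taken from the corpus:

```python
def iob_to_spans(tokens, tags):
    """Group token-level IOB tags (e.g. B-PERSON, I-PERSON, O) into
    (entity_type, surface_text) spans."""
    spans, current = [], None  # current = (entity_type, [tokens so far])
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(token)
        else:  # "O", or an I- tag that does not continue the current span
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = None
    if current:
        spans.append((current[0], " ".join(current[1])))
    return spans


tokens = ["Joko", "Widodo", "berkunjung", "ke", "Jakarta"]
tags = ["B-PERSON", "I-PERSON", "O", "O", "B-PLACE"]
print(iob_to_spans(tokens, tags))
# [('PERSON', 'Joko Widodo'), ('PLACE', 'Jakarta')]
```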
## Citation
```
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
@online{nergrit2019,
title={NERGrit Corpus},
author={NERGrit Developers},
year={2019},
url={https://github.com/grit-id/nergrit-corpus}
}
```
## License
MIT
## Homepage
[https://github.com/grit-id/nergrit-corpus](https://github.com/grit-id/nergrit-corpus)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.5964091420173645,
-0.7585746049880981,
0.1391732543706894,
0.44700169563293457,
-0.23462671041488647,
0.08352619409561157,
-0.3187257945537567,
-0.59515380859375,
0.7005979418754578,
0.4891608953475952,
-0.21560470759868622,
-0.561880350112915,
-0.6223709583282471,
0.34889447689056396,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mindchain/wikitext2 | mindchain | 2023-09-26T19:13:55Z | 37 | 0 | wikitext-2 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"languag... | 2023-09-26T19:13:55Z | 2023-09-26T19:13:23.000Z | 2023-09-26T19:13:23 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: WikiText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
- config_name: wikitext-103-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1295579
num_examples: 4358
- name: train
num_bytes: 545142639
num_examples: 1801350
- name: validation
num_bytes: 1154755
num_examples: 3760
download_size: 190229076
dataset_size: 547592973
- config_name: wikitext-2-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1270951
num_examples: 4358
- name: train
num_bytes: 10918134
num_examples: 36718
- name: validation
num_bytes: 1134127
num_examples: 3760
download_size: 4475746
dataset_size: 13323212
- config_name: wikitext-103-raw-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1305092
num_examples: 4358
- name: train
num_bytes: 546501673
num_examples: 1801350
- name: validation
num_bytes: 1159292
num_examples: 3760
download_size: 191984949
dataset_size: 548966057
- config_name: wikitext-2-raw-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1305092
num_examples: 4358
- name: train
num_bytes: 11061733
num_examples: 36718
- name: validation
num_bytes: 1159292
num_examples: 3760
download_size: 4721645
dataset_size: 13526117
---
# Dataset Card for "wikitext"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843)
- **Point of Contact:** [Stephen Merity](mailto:smerity@salesforce.com)
- **Size of downloaded dataset files:** 391.41 MB
- **Size of the generated dataset:** 1.12 GB
- **Total amount of disk used:** 1.52 GB
### Dataset Summary
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over
110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation
and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models
that can take advantage of long term dependencies.
Each subset comes in two different variants:
- Raw (for character level work) contains the raw tokens, before the addition of the <unk> (unknown) tokens.
- Non-raw (for word level work) contains only the tokens in its vocabulary (wiki.train.tokens, wiki.valid.tokens, and wiki.test.tokens).
The out-of-vocabulary tokens have been replaced with the <unk> token.
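The word-level preprocessing described above can be sketched as follows (an illustrative approximation, not the dataset's actual build script):

```python
def to_word_level(tokens, vocab):
    # Replace any token outside the vocabulary with the <unk> marker,
    # mirroring the non-raw (word-level) variant described above.
    return [tok if tok in vocab else "<unk>" for tok in tokens]

# Toy vocabulary and sentence for illustration only.
vocab = {"the", "gold", "dollar", "was", "a", "coin"}
print(to_word_level(["the", "gold", "sovereign", "was", "a", "coin"], vocab))
```

The raw variants skip this replacement and keep every original token.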
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### wikitext-103-raw-v1
- **Size of downloaded dataset files:** 191.98 MB
- **Size of the generated dataset:** 549.42 MB
- **Total amount of disk used:** 741.41 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" The gold dollar or gold one @-@ dollar piece was a coin struck as a regular issue by the United States Bureau of the Mint from..."
}
```
#### wikitext-103-v1
- **Size of downloaded dataset files:** 190.23 MB
- **Size of the generated dataset:** 548.05 MB
- **Total amount of disk used:** 738.27 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
}
```
#### wikitext-2-raw-v1
- **Size of downloaded dataset files:** 4.72 MB
- **Size of the generated dataset:** 13.54 MB
- **Total amount of disk used:** 18.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" The Sinclair Scientific Programmable was introduced in 1975 , with the same case as the Sinclair Oxford . It was larger than t..."
}
```
#### wikitext-2-v1
- **Size of downloaded dataset files:** 4.48 MB
- **Size of the generated dataset:** 13.34 MB
- **Total amount of disk used:** 17.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### wikitext-103-raw-v1
- `text`: a `string` feature.
#### wikitext-103-v1
- `text`: a `string` feature.
#### wikitext-2-raw-v1
- `text`: a `string` feature.
#### wikitext-2-v1
- `text`: a `string` feature.
### Data Splits
| name | train |validation|test|
|-------------------|------:|---------:|---:|
|wikitext-103-raw-v1|1801350| 3760|4358|
|wikitext-103-v1 |1801350| 3760|4358|
|wikitext-2-raw-v1 | 36718| 3760|4358|
|wikitext-2-v1 | 36718| 3760|4358|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
```
@misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | [
-0.5935544371604919,
-0.5063395500183105,
0.1510479897260666,
0.22923576831817627,
-0.13326697051525116,
-0.0418834425508976,
-0.27003246545791626,
-0.5888042449951172,
0.5724928975105286,
0.44220981001853943,
-0.7605032920837402,
-0.7422682046890259,
-0.5291934013366699,
0.072095096111297... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
classla/ParlaSent | classla | 2023-09-28T13:52:55Z | 37 | 3 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:sl",
"language:en",
"language:cs",
"language:bs",
"language:hr",
"language:sr",
"language:sk",
"license:cc-by-sa-4.0",
"sentiment",
"classification",
"parliament",
"parlament",
"arxiv:2309.09783",
"region:us... | 2023-09-28T13:52:55Z | 2023-09-28T06:20:28.000Z | 2023-09-28T06:20:28 | ---
license: cc-by-sa-4.0
language:
- sl
- en
- cs
- bs
- hr
- sr
- sk
tags:
- sentiment
- classification
- parliament
- parlament
pretty_name: ParlaSent
size_categories:
- 10K<n<100K
configs:
- config_name: EN
data_files: ParlaSent_EN.jsonl
- config_name: BCS
data_files: ParlaSent_BCS.jsonl
- config_name: CZ
data_files: ParlaSent_CZ.jsonl
- config_name: SK
data_files: ParlaSent_SK.jsonl
- config_name: SL
data_files: ParlaSent_SL.jsonl
- config_name: EN_additional_test
data_files: ParlaSent_EN_test.jsonl
- config_name: BCS_additional_test
data_files: ParlaSent_BCS_test.jsonl
task_categories:
- text-classification
---
# The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0
## Dataset Description
- **Repository: [Clarin.si repo](http://hdl.handle.net/11356/1868)**
- **Paper: https://arxiv.org/abs/2309.09783**
### Dataset Summary
This dataset was created and used for sentiment analysis experiments.
The dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test.
Each test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into "train", "dev" and "test" portions for performing language-specific experiments.
The 6-level annotation schema, used by annotators, is the following:
- Positive for sentences that are entirely or predominantly positive
- Negative for sentences that are entirely or predominantly negative
- M_Positive for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment
- M_Negative for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment
- P_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment
- N_Neutral for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment
The dataset is described in detail in our [paper](https://arxiv.org/abs/2309.09783).
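As a rough sketch, the 6-level schema can be collapsed into a 3-level (positive, negative, neutral) label; the exact mapping below is an assumption inferred from the label descriptions, not taken from the authors' code:

```python
# Assumed mapping from the 6-level annotation schema to 3 coarse labels:
# leaning-positive labels collapse to "positive", leaning-negative to
# "negative", and both neutral variants to "neutral".
FINE_TO_COARSE = {
    "Positive": "positive",
    "M_Positive": "positive",
    "Negative": "negative",
    "M_Negative": "negative",
    "P_Neutral": "neutral",
    "N_Neutral": "neutral",
}

def coarse_label(fine_label):
    return FINE_TO_COARSE[fine_label]

print(coarse_label("M_Positive"))
```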
### Data Attributes
The attributes in training data are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first annotator's annotation
- annotator2 - second annotator's annotation
- reconciliation - the final label agreed upon after reconciliation
- label - three level (positive, negative, neutral) label based on the reconciliation label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- split - whether the sentence is to be used as a training, development or testing instance in case evaluation is done on the training portion of the dataset
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech
The attributes in the test data (_test.jsonl files) are the following:
- sentence - the sentence labeled for sentiment
- country - the country of the parliament the sentence comes from
- annotator1 - first (only) annotator's annotation, used as a final annotation
- label - three level (positive, negative, neutral) label based on the annotator1 label
- document_id - internal identifier of the document the sentence comes from
- sentence_id - internal identifier of the sentence inside the document
- term - the term of the parliament the sentence comes from
- date - the date the sentence was uttered as part of a speech in the parliament
- name - name of the MP giving the speech
- party - the party of the MP
- gender - binary gender of the MP
- birth year - year of birth of the MP
- ruling - whether the MP was in a coalition or an opposition at the time of giving the speech
### Citation information
Please quote the following paper:
```
@article{
Mochtak_Rupnik_Ljubešić_2023,
title={The ParlaSent multilingual training dataset for sentiment identification in parliamentary proceedings},
rights={All rights reserved},
url={http://arxiv.org/abs/2309.09783},
abstractNote={Sentiments inherently drive politics. How we receive and process information plays an essential role in political decision-making, shaping our judgment with strategic consequences both on the level of legislators and the masses. If sentiment plays such an important role in politics, how can we study and measure it systematically? The paper presents a new dataset of sentiment-annotated sentences, which are used in a series of experiments focused on training a robust sentiment classifier for parliamentary proceedings. The paper also introduces the first domain-specific LLM for political science applications additionally pre-trained on 1.72 billion domain-specific words from proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training of LLM on parliamentary data can significantly improve the model downstream performance on the domain-specific tasks, in our case, sentiment detection in parliamentary proceedings. We further show that multilingual models perform very well on unseen languages and that additional data from other languages significantly improves the target parliament’s results. The paper makes an important contribution to multiple domains of social sciences and bridges them with computer science and computational linguistics. Lastly, it sets up a more robust approach to sentiment analysis of political texts in general, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.},
note={arXiv:2309.09783 [cs]},
number={arXiv:2309.09783},
publisher={arXiv},
author={Mochtak, Michal and Rupnik, Peter and Ljubešić, Nikola},
year={2023},
month={Sep},
language={en}
}
``` | [
-0.6473110914230347,
-0.5328433513641357,
0.21009336411952972,
0.39460939168930054,
-0.5622185468673706,
-0.11165337264537811,
-0.46476036310195923,
0.021939240396022797,
0.26296570897102356,
0.37615278363227844,
-0.48433083295822144,
-0.613724410533905,
-0.7874908447265625,
0.213594987988... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
peterschmidt85/samsum | peterschmidt85 | 2023-09-30T17:06:11Z | 37 | 0 | null | [
"region:us"
] | 2023-09-30T17:06:11Z | 2023-09-30T17:05:57.000Z | 2023-09-30T17:05:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10789305
num_examples: 14732
download_size: 5844166
dataset_size: 10789305
---
# Dataset Card for "samsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4752565920352936,
0.027720851823687553,
0.33996689319610596,
0.16924628615379333,
-0.40847882628440857,
-0.12450622022151947,
0.3077661693096161,
-0.1148710548877716,
1.1241494417190552,
0.5338426232337952,
-0.9013544917106628,
-0.8522741198539734,
-0.7494418621063232,
-0.24219438433647... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tanvirsrbd1/dataset1_two_app_annotated1 | tanvirsrbd1 | 2023-10-01T05:32:23Z | 37 | 0 | null | [
"region:us"
] | 2023-10-01T05:32:23Z | 2023-10-01T05:32:15.000Z | 2023-10-01T05:32:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: xml
dtype: string
- name: html
dtype: string
- name: response
dtype: string
- name: annotated
dtype: string
splits:
- name: train
num_bytes: 1894327.1470588236
num_examples: 66
download_size: 265891
dataset_size: 1894327.1470588236
---
# Dataset Card for "dataset1_two_app_annotated1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6008231043815613,
-0.40694278478622437,
0.03168262541294098,
0.5099465250968933,
-0.23362702131271362,
0.01282556727528572,
0.3791063725948334,
-0.3128705322742462,
0.9656558036804199,
0.6917220950126648,
-0.7988005876541138,
-0.6674509644508362,
-0.5649512410163879,
-0.2665532827377319... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
englert-m/reconstruction | englert-m | 2023-10-30T12:47:01Z | 37 | 0 | null | [
"region:us"
] | 2023-10-30T12:47:01Z | 2023-10-10T03:37:34.000Z | 2023-10-10T03:37:34 | ---
dataset_info:
features:
- name: orig
dtype: uint32
- name: corrupted
dtype: image
- name: count
dtype: uint32
- name: xflip
dtype: int64
- name: yflip
dtype: int64
- name: scale
dtype: float32
- name: rotate_frac
dtype: float32
- name: aniso_w
dtype: float32
- name: aniso_r
dtype: float32
- name: translate_frac
sequence: float32
length: 2
splits:
- name: train
num_bytes: 103478894006.625
num_examples: 40695787
download_size: 0
dataset_size: 103478894006.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "reconstruction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6658292412757874,
-0.22688739001750946,
0.3488353490829468,
0.09158925712108612,
-0.20757819712162018,
0.10932254791259766,
0.4877989590167999,
-0.27343979477882385,
0.9920331239700317,
0.6649760007858276,
-0.7650396823883057,
-0.6095455288887024,
-0.6073899865150452,
-0.261273115873336... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
paul-w-qs/churn_prediction | paul-w-qs | 2023-10-13T11:13:38Z | 37 | 0 | null | [
"region:us"
] | 2023-10-13T11:13:38Z | 2023-10-13T11:13:26.000Z | 2023-10-13T11:13:26 | ---
dataset_info:
features:
- name: SALESFORCEACCOUNTID
dtype: string
- name: ACCOUNTING_MONTH
dtype: string
- name: CHURN
dtype: int64
- name: DOWNSELL
dtype: int64
- name: RENEWAL_MONTH
dtype: string
- name: CHURN_SUM
dtype: float64
- name: DOWNSELL_SUM
dtype: float64
- name: CONTRACT_START_DATE
dtype: string
- name: CONTRACT_END_DATE
dtype: string
- name: REGION
dtype: string
- name: TENURE_MONTHS
dtype: int64
- name: MONTHS_UNTIL_EVENT
dtype: int64
- name: DNB_GLOBAL_SALES_REVENUE
dtype: float64
- name: DNB_GLOBAL_EMPLOYEE_COUNT
dtype: int64
- name: DETECTEDSEATSCOUNT
dtype: float64
- name: PRODUCT_ONE
dtype: int64
- name: NUM_PRODUCTS_DAYS_LATE_PREV_90
dtype: int64
- name: LICENSINGSPECIALIST_CHANGE
dtype: int64
- name: CAR_HEALTH_CHECK
dtype: int64
- name: CROSS_SELL_RECENCY
dtype: int64
- name: SEATS_DOWNSELL_RECENCY
dtype: int64
- name: PRODUCT_TWO
dtype: int64
- name: PCT_PRODUCT_THREE_ENABLED
dtype: int64
- name: OTHER
dtype: int64
- name: CUSTOMER_BEHAVIOUR_ONE
dtype: int64
- name: CUSTOMER_BEHAVIOUR_TWO
dtype: int64
- name: PCT_PRODUCT_FOUR_ENABLED
dtype: int64
- name: PRODUCT_FIVE
dtype: int64
- name: PCT_PRODUCT_FIVE_ENABLED
dtype: int64
- name: MAX_SUPPORT_CASE_DAYSTOCLOSE
dtype: int64
- name: SUM_P4FLAG
dtype: int64
- name: PRODUCT_SIX
dtype: int64
- name: CUSTOMER_BEHAVIOUR_THREE
dtype: int64
- name: PRODUCT_THREE
dtype: int64
- name: SALESREP_CHANGE
dtype: int64
- name: SURVEY_AVG_CXI_SCORE
dtype: float64
- name: PCT_PRODUCT_FOUR_BEST_PRACTICE
dtype: int64
- name: EO_ATTENDED
dtype: int64
- name: PRODUCT_SEVEN
dtype: int64
- name: CUSTOMER_BEHAVIOUR_FOUR
dtype: int64
- name: SURVEY_AVG_NPS_SCORE
dtype: float64
- name: CUSTOMER_BEHAVIOUR_FIVE
dtype: int64
- name: PCT_PRODUCT_TWO_ENABLED
dtype: int64
- name: PCT_PRODUCT_SIX_ENABLED
dtype: int64
- name: CUSTOMER_BEHAVIOUR_SIX
dtype: int64
- name: PRODUCT_EIGHT
dtype: int64
- name: PRODUCT_NINE
dtype: int64
- name: PRODUCT_TEN
dtype: int64
- name: CUSTOMER_BEHAVIOUR_SEVEN
dtype: int64
- name: PRODUCT_ELEVEN
dtype: int64
- name: PRODUCT_TWELVE
dtype: int64
- name: PRODUCT_THIRTEEN
dtype: int64
- name: CUSTOMER_BEHAVIOUR_EIGHT
dtype: int64
- name: PRODUCT_FOURTEEN
dtype: int64
- name: PRODUCT_FIFTEEN
dtype: int64
- name: PRODUCT_DOWNSELL_RECENCY
dtype: int64
- name: UPSELLMANAGER_CHANGE
dtype: int64
- name: MAX_SUPPORT_CASE_TIMETOFIRSTRESPONSE
dtype: int64
- name: SURVEY_AVG_CASE_MOOD_SCORE
dtype: float64
- name: PCT_PRODUCT_THREE_BEST_PRACTICE
dtype: int64
- name: CASES_CREATED_FIRST_30_AFTER_IMPLEMENTATION
dtype: int64
- name: PRODUCT_FOUR
dtype: int64
- name: PCT_PRODUCT_TWO_BEST_PRACTICE
dtype: int64
- name: IMPLEMENTATION_MONTHS_RUNNING_TOTAL
dtype: int64
- name: CAR_CHURN_OR_RISK_DISCUSSION
dtype: int64
- name: PRODUCT_SIXTEEN
dtype: int64
- name: PRODUCT_SEVENTEEN
dtype: int64
- name: RATIO_SEATS_ACTIVE
dtype: float64
- name: MONTHLY_PRODUCT_COUNT
dtype: int64
- name: ARR
dtype: float64
- name: SUPPORT_CASE_NUMBEROFSLABREACHES
dtype: int64
- name: AVG_SEATS
dtype: int64
- name: CUSTOMER_BEHAVIOUR_NINE
dtype: int64
- name: CONTRACT_LENGTH
dtype: int64
- name: CUSTOMER_BEHAVIOUR_TEN
dtype: int64
- name: PRODUCT_EIGHTEEN
dtype: int64
- name: CUSTOMER_BEHAVIOUR_ELEVEN
dtype: int64
- name: SEATS_UPSELL_RECENCY
dtype: int64
- name: PRODUCT_NINETEEN
dtype: int64
- name: ACCOUNTMANAGER_CHANGE
dtype: int64
- name: PCT_PRODUCT_NINE_ENABLED
dtype: int64
- name: PRODUCT_TWENTY
dtype: int64
- name: PRODUCT_TWENTYONE
dtype: int64
- name: PRODUCT_TWENTYTWO
dtype: int64
- name: PRODUCT_TWENTYTHREE
dtype: int64
- name: SUM_P0FLAG
dtype: int64
- name: SUM_P1FLAG
dtype: int64
- name: SUM_P2FLAG
dtype: int64
- name: SUM_P3FLAG
dtype: int64
- name: BACKLOG
dtype: int64
- name: AVG_SUPPORT_CASE_PRIORITY_SCORE
dtype: float64
- name: COMPETITOR_SEATS
dtype: int64
- name: RPU
dtype: float64
- name: SECTOR
dtype: string
- name: P0123_FLAGS
dtype: int64
- name: ARR_DIV_SEATS
dtype: float64
splits:
- name: train
num_bytes: 85464346
num_examples: 100000
download_size: 7102198
dataset_size: 85464346
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "churn_prediction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3115996718406677,
-0.2167454957962036,
0.10305514186620712,
0.0933983102440834,
-0.17160452902317047,
0.005322102922946215,
0.4864489436149597,
-0.1774192899465561,
0.7974937558174133,
0.6358152031898499,
-1.0331895351409912,
-0.529109537601471,
-0.39405906200408936,
-0.6515671610832214... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
salma-remyx/hf_objdet_test | salma-remyx | 2023-10-29T23:11:36Z | 37 | 0 | null | [
"region:us"
] | 2023-10-29T23:11:36Z | 2023-10-28T21:10:22.000Z | 2023-10-28T21:10:22 | ---
dataset_info:
features:
- name: image
dtype: image
- name: objects
struct:
- name: bbox
sequence:
sequence: float64
- name: categories
sequence: int64
splits:
- name: train
num_bytes: 7545187.0
num_examples: 16
download_size: 7548342
dataset_size: 7545187.0
---
# Dataset Card for "hf_objdet_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8273977637290955,
-0.6596131920814514,
0.16598834097385406,
0.2848440706729889,
-0.09503060579299927,
-0.055322155356407166,
0.2924360930919647,
-0.1237613633275032,
0.6998368501663208,
0.5455424785614014,
-0.7192995548248291,
-0.6904767751693726,
-0.42287135124206543,
-0.07518196851015... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
geraldOslo/prompt_answer_dataset_for_norwegian_radiation_protection_in_dentistry | geraldOslo | 2023-11-28T08:50:10Z | 37 | 0 | null | [
"size_categories:1K<n<10K",
"language:no",
"license:cc-by-2.0",
"dentistry",
"physics",
"radiation protection",
"doi:10.57967/hf/1291",
"region:us"
] | 2023-11-28T08:50:10Z | 2023-10-30T08:56:56.000Z | 2023-10-30T08:56:56 | ---
license: cc-by-2.0
language:
- 'no'
tags:
- dentistry
- physics
- radiation protection
pretty_name: Question/answer connected to radiation protection in dentistry
size_categories:
- 1K<n<10K
configs:
- config_name: tab
data_files: "question_answer_pairs_radiation_protection.csv"
sep: ";"
---
# Contents
## Overview
This dataset comprises prompt/answer pairs related to the curriculum for Norwegian dentistry and dental hygiene students, specifically focusing on the subjects of radiation physics, radiation protection, and radiological technology.
## Data source
The prompt/answer pairs in this dataset were generated using commercially available Large Language Models (LLMs), including OpenAI GPT-4 and Anthropic Claude 2. These pairs were generated based on the analysis of documents provided as input to these LLMs.
Following this, some irrelevant pairs were deleted, some were edited for accuracy and clarity, and all pairs were proofread for errors.
### Source documents
The dataset was created using a variety of source documents, primarily encompassing:
- [Lov om strålevern og bruk av stråling (strålevernloven) (2000)](https://lovdata.no/dokument/SF/forskrift/2016-12-16-1659)
- [Forskrift om strålevern og bruk av stråling (strålevernforskriften) (2016)](https://lovdata.no/dokument/SF/forskrift/2016-12-16-1659)
- [DSA: Medisinsk strålebruk (web page) (2023)](https://dsa.no/medisinsk-stralebruk)
- [DSA: Veileder 14](https://dsa.no/publikasjoner/veileder-14-veileder-om-stralebruk-innen-odontologi/Veileder_14_odontologi.pdf)
- [DSA: StrålevernRapport • 2014:2 Strålebruk i Norge](https://dsa.no/publikasjoner/stralevernrapport-2-2014-stralebruk-i-norge/StralevernRapport_02-2014_Str%C3%A5lebruk%20i%20Norge.pdf)
- [DSA: StrålevernRapport 2015:12 Stråledoser til befolkningen](https://dsa.no/publikasjoner/stralevernrapport-12-2015-straledoser-til-befolkningen/StralevernRapport_12-15_Str%C3%A5ledoser_til_befolkningen-.pdf)
- [DSA: Veileder til forskrift om strålevern og bruk av stråling Veileder Nummer 5 Revidert mai 2023](https://dsa.no/publikasjoner/_/attachment/inline/70e8470f-6c36-46fc-9e97-c27298859d66:22ab78bd659798c58cc3ce55c07dbb9aad9b44a0/Veileder%205_rev-mai2023.pdf)
- [Gerald Torgersen: Strålingsfysikk, strålevern og radiologisk teknologi for tannpleie- og tannlegestudenter (online course) (2023)](https://uio.instructure.com/courses/19845)
- Own teaching material and notes
DSA is the Norwegian Radiation and Nuclear Safety Authority.
# Purpose
The dataset was generated for fine-tuning open-source LLMs.
# Format
The dataset is a UTF-8 encoded, ";"-separated CSV file with two columns: prompt and prediction.
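A minimal sketch of parsing that format with Python's standard csv module; the inline sample merely stands in for question_answer_pairs_radiation_protection.csv:

```python
import csv
import io

# Inline stand-in for question_answer_pairs_radiation_protection.csv;
# the real file is UTF-8 encoded and ";"-separated with the same two columns.
sample = (
    "prompt;prediction\n"
    "Hva er ALARA?;ALARA står for As Low As Reasonably Achievable.\n"
)

rows = list(csv.DictReader(io.StringIO(sample), delimiter=";"))
print(rows[0]["prompt"])
```

Replacing the in-memory sample with `open("question_answer_pairs_radiation_protection.csv", encoding="utf-8")` would read the actual file.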
# Warning
The dataset is provided for use at your own risk. Please give feedback if you find a serious error.
# Todo
- add more relevant prompt/response pairs
- further proofreading and adjustments | [
-0.26200437545776367,
-0.5903058648109436,
0.5709127187728882,
0.060528870671987534,
-0.32316824793815613,
-0.4128700792789459,
-0.001529769622720778,
-0.3032413721084595,
0.24392098188400269,
0.5181010365486145,
-0.6319612264633179,
-0.7988431453704834,
-0.3374800682067871,
0.381165325641... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Elijahbodden/SMS-v2 | Elijahbodden | 2023-11-02T01:50:40Z | 37 | 0 | null | [
"license:unlicense",
"region:us"
] | 2023-11-02T01:50:40Z | 2023-11-01T23:51:16.000Z | 2023-11-01T23:51:16 | ---
license: unlicense
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sunhaozhepy/sst_keywords_embeddings | sunhaozhepy | 2023-11-02T08:25:35Z | 37 | 0 | null | [
"region:us"
] | 2023-11-02T08:25:35Z | 2023-11-02T08:25:28.000Z | 2023-11-02T08:25:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: float32
- name: tokens
dtype: string
- name: tree
dtype: string
- name: keywords
dtype: string
- name: keywords_embeddings
sequence: float32
splits:
- name: train
num_bytes: 29449976
num_examples: 8544
- name: validation
num_bytes: 3798043
num_examples: 1101
- name: test
num_bytes: 7617749
num_examples: 2210
download_size: 47140795
dataset_size: 40865768
---
# Dataset Card for "sst_keywords_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2685950696468353,
-0.3196910321712494,
0.332637220621109,
0.005019654985517263,
-0.6036988496780396,
0.30348947644233704,
0.007247497793287039,
0.30415529012680054,
0.8799924850463867,
0.5287762880325317,
-0.5147724747657776,
-1.0425857305526733,
-0.783501923084259,
-0.3201695382595062,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/SlimOrcaRU | dim | 2023-11-07T23:49:06Z | 37 | 0 | null | [
"region:us"
] | 2023-11-07T23:49:06Z | 2023-11-07T23:48:48.000Z | 2023-11-07T23:48:48 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: value_ru
dtype: string
- name: weight
dtype: float64
- name: key
dtype: int64
splits:
- name: train
num_bytes: 183635644
num_examples: 47536
download_size: 83293621
dataset_size: 183635644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SlimOrcaRU"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7083805203437805,
-0.21905899047851562,
0.1670508086681366,
0.0223630890250206,
-0.24040831625461578,
-0.0766119733452797,
0.027357805520296097,
-0.14455100893974304,
1.0723328590393066,
0.3912656605243683,
-0.936454713344574,
-0.6383345723152161,
-0.4604347050189972,
-0.220765367150306... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AiresPucrs/stanford-encyclopedia-philosophy | AiresPucrs | 2023-11-17T10:07:54Z | 37 | 1 | null | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-17T10:07:54Z | 2023-11-10T09:42:36.000Z | 2023-11-10T09:42:36 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
---
**Overview**
The Stanford Encyclopedia of Philosophy (SEP) is a dynamic reference work, including over 1,770 entries written by top scholars in the field of philosophy.
This dataset contains the full text of all articles contained within the SEP. All data is available as a CSV file and a folder of .txt files. The CSV file contains information about the original page (URL), the subject of the page (Category), and the text of the page (Text). This dataset can be used for NLP applications like text mining, text classification, and text generation, among others.
**Dataset**
This dataset contains 182,531 text entries related to 1,770 different philosophical subjects.
All entries can be found in the CSV file (stanford_encyclopedia_philosophy.csv), or as separate .txt files in the dataset folder.
These txt files are separated by category (Socrates, Plato, Aesthetic) in different folders.
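A minimal sketch of filtering rows by category with Python's standard csv module; the inline sample and the exact header names (URL, Category, Text) are assumptions based on the description above:

```python
import csv
import io

# Inline stand-in for stanford_encyclopedia_philosophy.csv; the real file
# has the page URL, Category, and Text columns described above.
sample = (
    "URL,Category,Text\n"
    "https://plato.stanford.edu/entries/plato/,Plato,Plato was an Athenian philosopher.\n"
    "https://plato.stanford.edu/entries/socrates/,Socrates,Socrates wrote nothing himself.\n"
)

rows = [r for r in csv.DictReader(io.StringIO(sample)) if r["Category"] == "Plato"]
print(len(rows), rows[0]["URL"])
```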
**Vocabulary**
In addition to the text data, we also provide an already-tokenized bag of words/vocabulary of different sizes (5000, 10000, 15000, 20000, 25000, 200000)
together with the full tokenized vocabulary (vocab_SEP) in the format of a .txt file.
These vocabularies were computed using the TextVectorization from Keras (Tensorflow 2.10.1).
**Dataset License**
The Stanford-encyclopedia-philosophy Dataset is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.
**Disclaimer**
This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose.
The creators and contributors of this dataset are not liable for any damages or losses arising from its use.
Please review and comply with the licenses and terms of the original datasets before use. | [
-0.36364731192588806,
-0.4695512056350708,
0.1901412308216095,
0.1799965500831604,
-0.6137421131134033,
-0.07527583837509155,
-0.4111277461051941,
-0.33762356638908386,
0.4742763936519623,
0.9033031463623047,
-0.5512751340866089,
-0.6801737546920776,
-0.5584300756454468,
0.1738602668046951... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlignmentLab-AI/gpt4vsent | AlignmentLab-AI | 2023-11-13T17:33:58Z | 37 | 0 | null | [
"region:us"
] | 2023-11-13T17:33:58Z | 2023-11-13T11:32:32.000Z | 2023-11-13T11:32:32 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
librarian-bots/collection_cloner-usage-stats | librarian-bots | 2023-11-22T20:25:30Z | 37 | 0 | null | [
"region:us"
] | 2023-11-22T20:25:30Z | 2023-11-14T14:30:53.000Z | 2023-11-14T14:30:53 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Supersaiyan1729/instruct_dataset_mcq | Supersaiyan1729 | 2023-11-14T21:23:22Z | 37 | 0 | null | [
"region:us"
] | 2023-11-14T21:23:22Z | 2023-11-14T21:23:09.000Z | 2023-11-14T21:23:09 | ---
dataset_info:
features:
- name: input_prompt
dtype: string
- name: input_output_prompt
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 69681099
num_examples: 48302
download_size: 29758222
dataset_size: 69681099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "instruct_dataset_mcq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6513218879699707,
-0.25142720341682434,
0.2311854362487793,
-0.005029879044741392,
-0.028920840471982956,
0.20391567051410675,
0.5273850560188293,
0.076096311211586,
0.7227780818939209,
0.6391576528549194,
-0.8185235857963562,
-0.745979368686676,
-0.4091850221157074,
-0.2328103631734848... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AzerOuerghi/test | AzerOuerghi | 2023-11-16T15:15:37Z | 37 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-16T15:15:37Z | 2023-11-15T15:54:10.000Z | 2023-11-15T15:54:10 | ---
license: unknown
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Definite/my_precious | Definite | 2023-11-20T10:52:05Z | 37 | 0 | null | [
"region:us"
] | 2023-11-20T10:52:05Z | 2023-11-16T15:31:15.000Z | 2023-11-16T15:31:15 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': jms
'1': shin
'2': winner
'3': biblei
splits:
- name: train
num_bytes: 2691826.25
num_examples: 5000
- name: eval
num_bytes: 1615095.75
num_examples: 3000
- name: test
num_bytes: 1115019
num_examples: 2000
download_size: 3023909
dataset_size: 5421941.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
- split: test
path: data/test-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Patt/HellaSwag_thai | Patt | 2023-11-16T16:34:13Z | 37 | 0 | null | [
"language:th",
"language:en",
"arxiv:1907.04307",
"region:us"
] | 2023-11-16T16:34:13Z | 2023-11-16T16:29:16.000Z | 2023-11-16T16:29:16 | ---
language:
- th
- en
---
# Dataset Card for HellaSwag_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [hellaswag](https://huggingface.co/datasets/hellaswag), produced with Google Translate, using the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to score the quality of each Thai translation.
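The scoring step can be sketched as cosine similarity between source and translated sentence embeddings. The vectors below are placeholders standing in for real encoder output, since loading the actual Multilingual Universal Sentence Encoder is out of scope here:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors -- the kind of score
    used to rate each Thai translation against its English source."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings (not real encoder output):
en_vec = [0.1, 0.3, 0.5]
th_vec = [0.1, 0.3, 0.5]
score = cosine_similarity(en_vec, th_vec)  # 1.0 for identical embeddings
```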
### Languages
- EN
- TH | [
-0.2988744378089905,
-0.4602753818035126,
0.04785594716668129,
0.3785865008831024,
-0.822783350944519,
0.004872013349086046,
-0.2908240556716919,
-0.18882639706134796,
0.30757221579551697,
0.6095991730690002,
-0.979199230670929,
-1.170638084411621,
-0.7600088715553284,
0.28825345635414124,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DBQ/Gucci.Product.prices.United.States | DBQ | 2023-11-18T22:15:36Z | 37 | 0 | null | [
"region:us"
] | 2023-11-18T22:15:36Z | 2023-11-18T22:15:31.000Z | 2023-11-18T22:15:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: string
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 2746363
num_examples: 5764
download_size: 807752
dataset_size: 2746363
---
# Gucci web scraped data
## About the website
The **luxury fashion industry** in the **United States** is a rapidly evolving market, with a significant presence of globally renowned brands such as **Gucci**. In recent years, the industry has channelled its efforts towards online platforms, leading to an increase in **ecommerce** activity. This transition has played a crucial role in boosting the accessibility and convenience of purchasing high-end fashion products. Of particular interest is the **ecommerce product-list page (PLP) data on Gucci** in the United States. This dataset provides invaluable insights into online fashion consumer behavior, purchase patterns, and product preference trends.
## Link to **dataset**
[United States - Gucci - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Gucci%20Product-prices%20United%20States/r/rec9JLoIyDa81dIck)
| [
-0.23121987283229828,
-0.6485145688056946,
0.065865159034729,
0.3444552719593048,
-0.34162381291389465,
0.4154752194881439,
0.1871473342180252,
-0.6583582162857056,
0.21755704283714294,
-0.02774137631058693,
-0.6984759569168091,
-0.6983729600906372,
0.09103710949420929,
-0.2125237733125686... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
danielz01/fMoW | danielz01 | 2023-11-20T10:36:50Z | 37 | 0 | null | [
"region:us"
] | 2023-11-20T10:36:50Z | 2023-11-20T08:29:33.000Z | 2023-11-20T08:29:33 | ---
configs:
- config_name: WILDS
data_files:
- split: test
path: WILDS/test-*
- split: id_test
path: WILDS/id_test-*
- split: val
path: WILDS/val-*
- split: id_val
path: WILDS/id_val-*
- split: train
path: WILDS/train-*
dataset_info:
config_name: WILDS
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airport
'1': airport_hangar
'2': airport_terminal
'3': amusement_park
'4': aquaculture
'5': archaeological_site
'6': barn
'7': border_checkpoint
'8': burial_site
'9': car_dealership
'10': construction_site
'11': crop_field
'12': dam
'13': debris_or_rubble
'14': educational_institution
'15': electric_substation
'16': factory_or_powerplant
'17': fire_station
'18': flooded_road
'19': fountain
'20': gas_station
'21': golf_course
'22': ground_transportation_station
'23': helipad
'24': hospital
'25': impoverished_settlement
'26': interchange
'27': lake_or_pond
'28': lighthouse
'29': military_facility
'30': multi-unit_residential
'31': nuclear_powerplant
'32': office_building
'33': oil_or_gas_facility
'34': park
'35': parking_lot_or_garage
'36': place_of_worship
'37': police_station
'38': port
'39': prison
'40': race_track
'41': railway_bridge
'42': recreational_facility
'43': road_bridge
'44': runway
'45': shipyard
'46': shopping_mall
'47': single-unit_residential
'48': smokestack
'49': solar_farm
'50': space_facility
'51': stadium
'52': storage_tank
'53': surface_mine
'54': swimming_pool
'55': toll_booth
'56': tower
'57': tunnel_opening
'58': waste_disposal
'59': water_treatment_facility
'60': wind_farm
'61': zoo
- name: domain_labels
sequence: int64
- name: domain_labels_readable
struct:
- name: from_source_domain
dtype: bool
- name: region
dtype: string
- name: y
dtype: 'null'
- name: year
dtype: int64
- name: split
dtype: string
- name: img_filename
dtype: string
- name: img_path
dtype: string
- name: spatial_reference
dtype: string
- name: epsg
dtype: int64
- name: category
dtype: string
- name: visible
dtype: bool
- name: img_width
dtype: int64
- name: img_height
dtype: int64
- name: country_code
dtype: string
- name: cloud_cover
dtype: int64
- name: timestamp
dtype: string
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: region
dtype: int64
- name: y
dtype: int64
- name: year
dtype: float64
splits:
- name: test
num_bytes: 2283079843.392
num_examples: 22108
- name: id_test
num_bytes: 1168174637.125
num_examples: 11327
- name: val
num_bytes: 2052331276.625
num_examples: 19915
- name: id_val
num_bytes: 1191085782.625
num_examples: 11483
- name: train
num_bytes: 7946709118.125
num_examples: 76863
download_size: 14612709837
dataset_size: 14641380657.892
---
# Dataset Card for "fMoW"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5719138383865356,
-0.30872029066085815,
0.275766521692276,
0.23120348155498505,
-0.18339018523693085,
-0.18013568222522736,
0.4337959885597229,
-0.4388083219528198,
0.7764554023742676,
0.6162562966346741,
-1.1111435890197754,
-0.782001793384552,
-0.6095198392868042,
-0.09933020919561386... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pszemraj/midjourney-messages-cleaned | pszemraj | 2023-11-22T08:46:09Z | 37 | 0 | null | [
"task_categories:text-generation",
"source_datasets:vivym/midjourney-messages",
"language:en",
"license:apache-2.0",
"midjourney",
"region:us"
] | 2023-11-22T08:46:09Z | 2023-11-21T16:46:17.000Z | 2023-11-21T16:46:17 | ---
language:
- en
license: apache-2.0
source_datasets: vivym/midjourney-messages
task_categories:
- text-generation
dataset_info:
- config_name: deduped
features:
- name: id
dtype: string
- name: channel_id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2538669745.0
num_examples: 14828769
download_size: 1585207687
dataset_size: 2538669745.0
- config_name: default
features:
- name: id
dtype: string
- name: channel_id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3575844717.3610477
num_examples: 19716685
download_size: 1514418407
dataset_size: 3575844717.3610477
configs:
- config_name: deduped
data_files:
- split: train
path: deduped/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- midjourney
---
# midjourney-messages-cleaned
This is [vivym/midjourney-messages](https://huggingface.co/datasets/vivym/midjourney-messages) but with the following cleaning steps:
- remove most columns (keep `id` columns for reference vs. original)
- Apply `clean-text` to all rows (_keep casing_)
- rename `content` to `text` (ffs)
- remove intermediate ID/tag (???) in angle brackets at the end, remove double asterisks `**`
- remove exact duplicate rows
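The last three steps can be approximated with plain Python. This is a rough re-implementation for illustration only, not the exact pipeline that produced the dataset:

```python
import re

def clean_message(text):
    """Approximate two of the cleaning steps above: strip a trailing
    angle-bracket ID/tag and remove double asterisks."""
    text = re.sub(r"\s*<[^<>]*>\s*$", "", text)  # trailing <...> ID/tag
    text = text.replace("**", "")                # double asterisks
    return text.strip()

def dedupe(rows):
    """Remove exact duplicate rows while preserving order."""
    seen = set()
    out = []
    for r in rows:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

rows = ["**a cat in space** <#123>", "a dog", "a dog"]
cleaned = dedupe(clean_message(r) for r in rows)  # ["a cat in space", "a dog"]
```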
## dataset structure
overall:
```python
DatasetDict({
train: Dataset({
features: ['id', 'channel_id', 'text'],
num_rows: 19738964
})
})
```
A single example looks like this:
```python
random.choice(dataset['train'])
{'id': '1108635049391308879',
'channel_id': '1008571088919343124',
'text': 'Warhammer 40k Chaos Space Marine with pink Armor and a guitar'}
```
## details
585M GPT-4 tiktoken tokens.
```
token_count
count 1.971668e+07
mean 2.971651e+01
std 3.875208e+01
min 1.000000e+00
25% 1.000000e+01
50% 1.900000e+01
75% 3.400000e+01
max 2.077000e+03
```
| [
-0.3759571313858032,
-0.545918881893158,
0.16335272789001465,
0.08293996006250381,
-0.6055248379707336,
-0.21843421459197998,
-0.2473536878824234,
0.16050219535827637,
0.621648371219635,
0.6383476257324219,
-0.7683772444725037,
-0.7432538270950317,
-0.540643572807312,
-0.06089417263865471,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eswardivi/Tamil_MSA_Audio_Text_Chunked | eswardivi | 2023-11-22T09:02:54Z | 37 | 0 | null | [
"region:us"
] | 2023-11-22T09:02:54Z | 2023-11-22T09:02:49.000Z | 2023-11-22T09:02:49 | ---
dataset_info:
features:
- name: Audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: FilePath
dtype: string
- name: Text
dtype: string
splits:
- name: train
num_bytes: 39958223.0
num_examples: 128
download_size: 39793452
dataset_size: 39958223.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sankettgorey/one_layout | sankettgorey | 2023-11-23T10:07:01Z | 37 | 0 | null | [
"region:us"
] | 2023-11-23T10:07:01Z | 2023-11-23T10:06:56.000Z | 2023-11-23T10:06:56 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 19925086.76470588
num_examples: 81
- name: test
num_bytes: 2691917.6470588236
num_examples: 11
- name: validation
num_bytes: 2439834.588235294
num_examples: 10
download_size: 20649181
dataset_size: 25056838.999999996
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlplabtdtu/ragas | nlplabtdtu | 2023-11-27T11:57:52Z | 37 | 0 | null | [
"region:us"
] | 2023-11-27T11:57:52Z | 2023-11-27T11:57:27.000Z | 2023-11-27T11:57:27 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DDSC/dkhate | DDSC | 2023-05-17T06:19:43Z | 36 | 4 | dkhate | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"arxiv:1908.04531",
"region:us"
... | 2023-05-17T06:19:43Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: DKHate
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
extra_gated_prompt: "Content warning: This dataset contains harmful text (abusive language, hate speech)."
paperswithcode_id: dkhate
---
# Dataset Card for DKHate
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/offensivelanguageandhatespeechdetectionfordanish/](https://stromberg.ai/publication/offensivelanguageandhatespeechdetectionfordanish/)
- **Repository:** [https://github.com/StrombergNLP/dkhate](https://github.com/StrombergNLP/dkhate)
- **Paper:** [https://aclanthology.org/2020.lrec-1.430/](https://aclanthology.org/2020.lrec-1.430/), [https://arxiv.org/abs/1908.04531](https://arxiv.org/abs/1908.04531)
- **Direct Download**: [https://figshare.com/articles/dataset/Danish_Hate_Speech_Abusive_Language_data/12220805](https://figshare.com/articles/dataset/Danish_Hate_Speech_Abusive_Language_data/12220805)
- **Point of Contact:** [Leon Derczynski](mailto:leod@itu.dk)
### Dataset Summary
This dataset consists of anonymised Danish Twitter data that has been annotated for hate speech. All credits go to the authors of the following paper, who created the dataset:
[Offensive Language and Hate Speech Detection for Danish](https://aclanthology.org/2020.lrec-1.430) (Sigurbergsson & Derczynski, LREC 2020)
### Supported Tasks and Leaderboards
This dataset is suitable for hate speech detection.
* PwC leaderboard for Task A: [Hate Speech Detection on DKhate](https://paperswithcode.com/sota/hate-speech-detection-on-dkhate)
### Languages
This dataset is in Danish.
## Dataset Structure
### Data Instances
Every entry in the dataset has a tweet and an associated label.
### Data Fields
An entry in the dataset consists of the following fields:
- `text` (`str`): The tweet content.
- `label` (`str`): The label of the `text`. Can be either "OFF" or "NOT", being offensive and not offensive, respectively.
### Data Splits
A `train` and `test` split is available, which are identical to the original splits. There are 2,960 tweets in the training split and 329 in the test split.
## Additional Information
### Dataset Curators
The curation of the dataset is solely due to the authors of [the original paper](https://aclanthology.org/2020.lrec-1.430/): Gudbjartur Ingi Sigurbergsson and Leon Derczynski.
### Licensing Information
The dataset is released under the CC BY 4.0 license.
### Citation Information
```
@inproceedings{sigurbergsson2020offensive,
title={Offensive Language and Hate Speech Detection for Danish},
author={Sigurbergsson, Gudbjartur Ingi and Derczynski, Leon},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={3498--3508},
year={2020}
}
```
### Contributions
Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub. | [
-0.3515983521938324,
-0.6087625622749329,
0.014093593694269657,
0.15103985369205475,
-0.42702245712280273,
0.2212691456079483,
-0.4189210534095764,
-0.5288644433021545,
0.3139467239379883,
0.14864756166934967,
-0.43544819951057434,
-0.9322414994239807,
-0.9421548247337341,
0.07345855236053... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
philschmid/germeval18 | philschmid | 2022-02-28T17:14:55Z | 36 | 4 | null | [
"region:us"
] | 2022-02-28T17:14:55Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/no-nonself-mrtydi-corpus | crystina-z | 2022-03-10T22:08:19Z | 36 | 0 | null | [
"region:us"
] | 2022-03-10T22:08:19Z | 2022-03-09T01:03:48.000Z | 2022-03-09T01:03:48 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/ukiyoe2photo | huggan | 2022-04-12T13:58:34Z | 36 | 0 | null | [
"arxiv:1703.10593",
"region:us"
] | 2022-04-12T13:58:34Z | 2022-03-29T12:30:34.000Z | 2022-03-29T12:30:34 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
-0.048607271164655685,
-0.31038811802864075,
0.2472076416015625,
0.0040995338931679726,
-0.38853687047958374,
0.01113723311573267,
-0.12735313177108765,
-0.3449961841106415,
0.044890038669109344,
0.6094780564308167,
-0.6503734588623047,
-0.727931022644043,
-0.4177476167678833,
0.1436172127... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
surdan/nerel_short | surdan | 2022-10-25T10:06:49Z | 36 | 0 | null | [
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | 2022-10-25T10:06:49Z | 2022-04-11T06:34:28.000Z | 2022-04-11T06:34:28 | ---
language: ru
multilinguality: monolingual
task_ids:
- named-entity-recognition
---
### About DataSet
The dataset is based on the NEREL corpus.
For more information about original data, please visit this [source](https://github.com/dialogue-evaluation/RuNNE)
An example of preparing the original data is shown in <Prepare_original_data.ipynb>.
### Additional info
The dataset contains 29 entity types; each can occur as the beginning of an entity ("B-") or inside one ("I-").
Frequency for each entity:
- I-AGE: 284
- B-AGE: 247
- B-AWARD: 285
- I-AWARD: 466
- B-CITY: 1080
- I-CITY: 39
- B-COUNTRY: 2378
- I-COUNTRY: 128
- B-CRIME: 214
- I-CRIME: 372
- B-DATE: 2701
- I-DATE: 5437
- B-DISEASE: 136
- I-DISEASE: 80
- B-DISTRICT: 98
- I-DISTRICT: 73
- B-EVENT: 3369
- I-EVENT: 2524
- B-FACILITY: 376
- I-FACILITY: 510
- B-FAMILY: 27
- I-FAMILY: 22
- B-IDEOLOGY: 271
- I-IDEOLOGY: 20
- B-LANGUAGE: 32
- I-LAW: 1196
- B-LAW: 297
- B-LOCATION: 242
- I-LOCATION: 139
- B-MONEY: 147
- I-MONEY: 361
- B-NATIONALITY: 437
- I-NATIONALITY: 41
- B-NUMBER: 1079
- I-NUMBER: 328
- B-ORDINAL: 485
- I-ORDINAL: 6
- B-ORGANIZATION: 3339
- I-ORGANIZATION: 3354
- B-PENALTY: 73
- I-PENALTY: 104
- B-PERCENT: 51
- I-PERCENT: 37
- B-PERSON: 5148
- I-PERSON: 3635
- I-PRODUCT: 48
- B-PRODUCT: 197
- B-PROFESSION: 3869
- I-PROFESSION: 2598
- B-RELIGION: 102
- I-RELIGION: 1
- B-STATE_OR_PROVINCE: 436
- I-STATE_OR_PROVINCE: 154
- B-TIME: 187
- I-TIME: 529
- B-WORK_OF_ART: 133
- I-WORK_OF_ART: 194
You can find mapper for entity ids in <id_to_label_map.pickle> file:
```python
import pickle
with open('id_to_label_map.pickle', 'rb') as f:
    mapper = pickle.load(f)  # dict mapping entity label ids to label strings
``` | [
-0.3994162976741791,
-0.5296445488929749,
0.26376473903656006,
0.04141983389854431,
0.020919522270560265,
-0.09107659012079239,
-0.2623021602630615,
-0.09717051684856415,
0.42327365279197693,
0.8657799363136292,
-0.44167056679725647,
-0.8771559596061707,
-0.6152567267417908,
0.116634689271... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yaxin/SemEval2014Task4Raw | Yaxin | 2022-08-15T08:20:00Z | 36 | 7 | null | [
"region:us"
] | 2022-08-15T08:20:00Z | 2022-04-21T13:32:59.000Z | 2022-04-21T13:32:59 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null |