id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
atmallen/elements_azaria_mitchell | 2023-07-28T20:11:29.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 41540.8
num_examples: 744
- name: test
num_bytes: 10385.2
num_examples: 186
download_size: 18979
dataset_size: 51926.0
---
# Dataset Card for "elements_azaria_mitchell"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/animals_azaria_mitchell | 2023-07-28T20:11:33.000Z | [
"region:us"
] | atmallen | null | null | null | 1 | 17 | ---
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 49381.093253968254
num_examples: 806
- name: test
num_bytes: 12375.906746031746
num_examples: 202
download_size: 23238
dataset_size: 61757.0
---
# Dataset Card for "animals_azaria_mitchell"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TitanMLData/arxiv_qa | 2023-08-04T11:38:53.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | TitanMLData | null | null | null | 1 | 17 | ---
task_categories:
- question-answering
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# Arxiv Paper Generative Question Answering
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset was made using OpenAI's `text-davinci-003` model to generate question/answer pairs from arXiv papers in [this dataset](https://huggingface.co/datasets/ccdv/arxiv-summarization).
### Data Fields
* TextID: references the row (paper) in the arxiv summarization dataset
* Question: a question based on the text
* Response: the answer to the question
* Text: the full text, with the paper prefixed as 'context:' and the question appended as 'question:'. Used for generative question answering using language modelling
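The `Text` field is a plain concatenation of the paper and its question. A minimal sketch of assembling such an input (the helper name and the exact delimiters/spacing are assumptions, not the dataset's verbatim format):

```python
def build_qa_text(context: str, question: str) -> str:
    # Join paper context and question into one prompt, mirroring the
    # "context:" / "question:" layout described above. Exact spacing
    # is an assumption rather than the dataset's verbatim format.
    return f"context: {context}\nquestion: {question}"

example = build_qa_text(
    "We propose a new attention mechanism for long documents.",
    "What does the paper propose?",
)
```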
### Data Splits
This dataset contains two splits: _train_ and _validation_
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 32,392 |
| Validation | 6,479 | |
adityarra07/sub_ATC | 2023-08-06T05:38:09.000Z | [
"region:us"
] | adityarra07 | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 136737944.06422067
num_examples: 1000
- name: test
num_bytes: 13673794.406422066
num_examples: 100
download_size: 12473551
dataset_size: 150411738.47064275
---
# Dataset Card for "sub_ATC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
derenrich/wikidata-en-descriptions | 2023-08-05T00:58:51.000Z | [
"task_categories:summarization",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-sa-4.0",
"wikidata",
"wikipedia",
"region:us"
] | derenrich | null | null | null | 1 | 17 | ---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 1M<n<10M
task_categories:
- summarization
tags:
- wikidata
- wikipedia
dataset_info:
features:
- name: output
dtype: string
- name: qid
dtype: string
- name: name
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 7446030683
num_examples: 6243554
download_size: 4214056440
dataset_size: 7446030683
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
georgesung/OpenOrca_35k | 2023-08-06T00:02:39.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | georgesung | null | null | null | 1 | 17 | ---
language:
- en
license: mit
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca_35k
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 63126475
num_examples: 35000
download_size: 36032211
dataset_size: 63126475
---
# Dataset Card for "OpenOrca_35k"
The first 35k examples from [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) |
Sneka/trial | 2023-09-28T11:47:46.000Z | [
"region:us"
] | Sneka | null | null | null | 0 | 17 | Entry not found |
thanhnew2001/travel | 2023-08-11T06:06:53.000Z | [
"region:us"
] | thanhnew2001 | null | null | null | 0 | 17 | Entry not found |
HydraLM/Open_Platypus_standardized | 2023-08-15T19:10:03.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 17 | Entry not found |
usernamedesu/pyg_dataset_markdown | 2023-08-17T16:19:57.000Z | [
"region:us"
] | usernamedesu | null | null | null | 0 | 17 | Entry not found |
thesistranslation/distilled-ccmatrix-de-en | 2023-10-03T09:22:18.000Z | [
"language:de",
"language:en",
"region:us"
] | thesistranslation | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 7314473226
num_examples: 30000000
download_size: 5149999083
dataset_size: 7314473226
language:
- de
- en
---
# Dataset Card for "distilled-ccmatrix-de-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
n28div/IPRE | 2023-08-22T13:23:08.000Z | [
"language:zh",
"arxiv:1907.12801",
"region:us"
] | n28div | null | null | null | 0 | 17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: head
dtype: string
- name: tail
dtype: string
- name: sentence
dtype: string
- name: relation
dtype: string
splits:
- name: train
num_bytes: 43508118
num_examples: 281259
- name: test
num_bytes: 11583338
num_examples: 76826
- name: dev
num_bytes: 5792878
num_examples: 37637
download_size: 30002846
dataset_size: 60884334
language:
- zh
---
# Dataset Card for "IPRE"
Unofficial version of [IPRE: a Dataset for Inter-Personal Relationship Extraction](https://arxiv.org/abs/1907.12801).
All data has been downloaded from the official repository [https://github.com/SUDA-HLT/IPRE/](https://github.com/SUDA-HLT/IPRE/).
## IPRE: a Dataset for Inter-Personal Relationship Extraction
Inter-personal relationships are the basis of human society. Automatically identifying relations between persons in text requires annotated training data, which has so far been scarce. To address this, we introduce IPRE, a new dataset for inter-personal relationship extraction that aims to facilitate research on information extraction and knowledge graph construction. In total, IPRE has over 41,000 labeled sentences covering 34 types of relations, including about 9,000 sentences annotated by workers. It is the first dataset dedicated to inter-personal relationship extraction. We also define three evaluation tasks based on IPRE and provide baseline systems for comparison in future work.
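Each record pairs a sentence with a head entity, a tail entity, and one of the 34 relation labels. A minimal sketch of consuming such records (the helper and the sample row are illustrative, not drawn from the corpus):

```python
def to_triple(row: dict) -> tuple:
    # Extract the (head, relation, tail) triple from an IPRE-style record.
    return (row["head"], row["relation"], row["tail"])

# Illustrative record following the schema above, not a real corpus example.
row = {"head": "Alice", "tail": "Bob",
       "sentence": "Alice is Bob's sister.", "relation": "sibling"}
```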
|
TokenBender/unnatural_code_instructions_20M | 2023-08-27T11:16:15.000Z | [
"license:llama2",
"region:us"
] | TokenBender | null | null | null | 4 | 17 | ---
license: llama2
---
|
Porya/DSB | 2023-08-30T06:59:49.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:openrail",
"region:us"
] | Porya | null | null | null | 0 | 17 | ---
license: openrail
task_categories:
- text-generation
language:
- en
pretty_name: DSBTrain
size_categories:
- n<1K
--- |
vikp/xlcost_clean | 2023-09-01T21:45:11.000Z | [
"region:us"
] | vikp | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 6512530.282012524
num_examples: 6168
download_size: 2228263
dataset_size: 6512530.282012524
---
# Dataset Card for "xlcost_clean"
Python functions extracted from the xlcost benchmark and cleaned. |
dharani96556/kaggle-datasets | 2023-09-01T06:57:14.000Z | [
"license:openrail",
"region:us"
] | dharani96556 | null | null | null | 0 | 17 | ---
license: openrail
---
|
fiveflow/psychology-dataset | 2023-09-05T05:21:51.000Z | [
"region:us"
] | fiveflow | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: 'Unnamed: 0'
dtype: int64
- name: question
dtype: string
- name: type
dtype: string
- name: keywords
dtype: string
- name: resp
dtype: string
- name: new_resp
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3612449
num_examples: 2710
download_size: 1189445
dataset_size: 3612449
---
# Dataset Card for "psychology-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
giganion/pippa_roleplay_standardized | 2023-09-04T20:07:55.000Z | [
"region:us"
] | giganion | null | null | null | 1 | 17 | Entry not found |
ebony59/AO3_fandom_chatbot_1to1 | 2023-09-05T17:39:17.000Z | [
"region:us"
] | ebony59 | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1485153
num_examples: 750
download_size: 385380
dataset_size: 1485153
---
# Dataset Card for "AO3_fandom_chatbot_1to1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
danjacobellis/imagenet_batched_64 | 2023-09-06T15:35:53.000Z | [
"region:us"
] | danjacobellis | null | null | null | 0 | 17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: img_batch
list:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: label_batch
sequence: int64
- name: width
dtype: int64
- name: height
dtype: int64
splits:
- name: train
num_bytes: 29474962100
num_examples: 11497
- name: test
num_bytes: 2439605108
num_examples: 939
- name: validation
num_bytes: 1204052050
num_examples: 463
download_size: 33102976411
dataset_size: 33118619258
---
# Dataset Card for "imagenet_batched_64"
Subset of ImageNet-1k batched by image size
```python
from datasets import load_dataset
import PIL.Image as Image
import io
dataset = load_dataset("danjacobellis/imagenet_batched_64")
img_batch = dataset['train'][0]['img_batch']
img = Image.open(io.BytesIO(img_batch[0]['bytes']))
img
```

|
Falah/female_prompts | 2023-09-06T08:54:47.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 4060495
num_examples: 10000
download_size: 474495
dataset_size: 4060495
---
# Dataset Card for "female_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
silvacarl/guanaco-llama2-1k-test | 2023-09-06T16:05:41.000Z | [
"region:us"
] | silvacarl | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
allen-liao/demo | 2023-09-18T09:55:58.000Z | [
"license:apache-2.0",
"region:us"
] | allen-liao | null | null | null | 0 | 17 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1033
num_examples: 6
download_size: 2467
dataset_size: 1033
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gauss314/arg-equity | 2023-09-07T19:07:47.000Z | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"license:apache-2.0",
"Merval",
"equity",
"region:us"
] | gauss314 | null | null | null | 0 | 17 | ---
license: apache-2.0
task_categories:
- tabular-classification
- tabular-regression
tags:
- Merval
- equity
pretty_name: Merval daily variations, for deep learning and machine learning tests
---
# Downloading the Merval Equity Dataset
This document will guide you through the steps to download the Merval equity dataset from Hugging Face Datasets.
To start, you'll need to install Hugging Face's `datasets` library if you haven't done so already.
You can do this using the following pip command:
```bash
pip install datasets
```
Here's the Python code to load the Merval equity dataset from Hugging Face Datasets and convert it into a pandas DataFrame:
```python
from datasets import load_dataset
import pandas as pd

dataset_id = "gauss314/arg-equity"
data = load_dataset(dataset_id)
df = pd.DataFrame(data["train"][:])
```
|
huawei-noah/entity_cs | 2023-09-20T07:05:07.000Z | [
"size_categories:100M<n<1B",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:en",
"language:el"... | huawei-noah | null | null | null | 1 | 17 | ---
license: apache-2.0
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- en
- el
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- nb
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
size_categories:
- 100M<n<1B
---
# Dataset Card for EntityCS
- Repository: https://github.com/huawei-noah/noah-research/tree/master/NLP/EntityCS
- Paper: https://aclanthology.org/2022.findings-emnlp.499.pdf
- Point of Contact: [Fenia Christopoulou](mailto:efstathia.christopoulou@huawei.com), [Chenxi Whitehouse](mailto:chenxi.whitehouse@gmail.com)
## Dataset Description
We use the English Wikipedia and leverage entity information from Wikidata to construct an entity-based Code Switching corpus.
To achieve this, we make use of wikilinks in Wikipedia, i.e. links from one page to another.
We use the English [Wikipedia dump](https://dumps.wikimedia.org/enwiki/latest/) (November 2021) and extract raw text with [WikiExtractor](https://github.com/attardi/wikiextractor) while keeping track of wikilinks.
Since we are interested in creating entity-level CS instances, we only keep sentences containing at least one wikilink.
Given an English sentence with wikilinks, we first map the entity in each wikilink to its corresponding Wikidata ID and
retrieve its available translations from Wikidata.
For each sentence, we check which languages have translations for all entities in that sentence, and consider those as candidates for code-switching.
We ensure all entities are code-switched to the same target language in a single sentence, avoiding noise from including too many languages.
To control the size of the corpus, we generate up to five code-switched sentences for each English sentence.
In particular, if fewer than five languages have translations available for all the entities in a sentence, we create code-switched instances with all of them.
Otherwise, we randomly select five target languages from the candidates.
If no candidate languages can be found, we do not code-switch the sentence, instead, we keep it as part of the English corpus.
Finally, we surround each entity with entity indicators (`<e>`, `</e>`).
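The steps above can be sketched roughly as follows; the helper, sample sentence, and translation tables are illustrative assumptions (the real pipeline resolves wikilinks to Wikidata IDs and their translations rather than matching surface strings):

```python
import random

def code_switch(sentence, entities, translations, max_langs=5, seed=0):
    """Sketch of the construction above: produce up to `max_langs`
    code-switched variants of an English sentence.

    entities:     entity surface forms appearing in `sentence`
    translations: language code -> {surface form -> translation}
    Only languages covering *all* entities are candidates; with no
    candidates the sentence stays in the English corpus.
    """
    candidates = [lang for lang, table in translations.items()
                  if all(ent in table for ent in entities)]
    if not candidates:
        return []
    rng = random.Random(seed)
    chosen = rng.sample(candidates, min(max_langs, len(candidates)))
    variants = []
    for lang in chosen:
        cs = sentence
        for ent in entities:
            # Surround each switched entity with entity indicators.
            cs = cs.replace(ent, f"<{lang}>{translations[lang][ent]}</{lang}>")
        variants.append((lang, cs))
    return variants

variants = code_switch(
    "The subs then enter a coral reef.",
    ["coral reef"],
    {"de": {"coral reef": "Korallenriff"},
     "fr": {"coral reef": "récif corallien"}},
)
```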
## Supported Tasks and Leaderboards
The dataset was developed for intermediate pre-training of language models.
In the paper, we further fine-tune models on entity-centric downstream tasks such as NER.
## Languages
The dataset covers 93 languages in total, including English.
## Data Statistics
| Statistic | Count |
|:------------------------------|------------:|
| Languages | 93 |
| English Sentences | 54,469,214 |
| English Entities | 104,593,076 |
| Average Sentence Length | 23.37 |
| Average Entities per Sentence | 2 |
| CS Sentences per EN Sentence | ≤ 5 |
| CS Sentences | 231,124,422 |
| CS Entities | 420,907,878 |
## Data Fields
Each instance contains 4 fields:
- `id`: Unique ID of each sentence
- `language`: The language of choice for entity code-switching of the given sentence
- `en_sentence`: The original English sentence
- `cs_sentence`: The code-switched sentence
In the case of the English subset, the `cs_sentence` field does not exist as the sentences are not code-switched.
An example of what a data instance looks like:
```
{
'id': 19,
'en_sentence': 'The subs then enter a <en>coral reef</en> with many bright reflective colors.',
'cs_sentence': 'The subs then enter a <de>Korallenriff</de> with many bright reflective colors.',
'language': 'de'
}
```
## Data Splits
There is a single data split for each language. You can randomly select a few examples from each language to serve as a validation set.
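Since only a train split is shipped per language, a validation set has to be carved out by the user; a minimal stdlib sketch of a reproducible index split (hypothetical helper, not part of the dataset tooling):

```python
import random

def split_indices(n_examples: int, n_valid: int, seed: int = 0):
    # Reproducibly pick `n_valid` validation indices; the rest remain train.
    rng = random.Random(seed)
    valid = set(rng.sample(range(n_examples), n_valid))
    train = [i for i in range(n_examples) if i not in valid]
    return train, sorted(valid)

train_idx, valid_idx = split_indices(1000, 50)
```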
## Limitations
An important limitation of this work is that an entity's morphological inflection is not checked before code-switching.
This can introduce errors where the form of the CS entity does not agree with its surrounding context (e.g. number agreement).
Such cases should be rare, since only entities are switched; nevertheless, this should be improved in a later version of the corpus.
Secondly, the diversity of languages in the EntityCS corpus is restricted to the overlap between the languages available in Wikidata and those covered by XLM-R pre-training.
This choice was made to allow a fair comparison between models; however, the corpus can be extended to languages that XLM-R does not cover by following the procedure presented in the paper.
## Citation
**BibTeX**
```bibtex
@inproceedings{whitehouse-etal-2022-entitycs,
title = "{E}ntity{CS}: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching",
author = "Whitehouse, Chenxi and
Christopoulou, Fenia and
Iacobacci, Ignacio",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.499",
pages = "6698--6714"
}
```
**APA**
```text
Whitehouse, C., Christopoulou, F., & Iacobacci, I. (2022). EntityCS: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching. In Findings of the Association for Computational Linguistics: EMNLP 2022.
```
|
amphora/kobest-trans-en | 2023-09-09T15:02:24.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | amphora | The dataset contains data for KoBEST dataset | null | null | 1 | 17 | ---
license: cc-by-sa-4.0
---
|
NewstaR/AverageData | 2023-09-10T12:25:45.000Z | [
"region:us"
] | NewstaR | null | null | null | 0 | 17 | Entry not found |
BrunoGR/Emo_support_11kBalanced | 2023-09-19T22:54:23.000Z | [
"region:us"
] | BrunoGR | null | null | null | 0 | 17 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: texto
dtype: string
- name: etiqueta
dtype: string
splits:
- name: test
num_bytes: 152156
num_examples: 1309
- name: train
num_bytes: 12765622
num_examples: 121708
- name: validation
num_bytes: 253200
num_examples: 2200
download_size: 8063810
dataset_size: 13170978
---
# Dataset Card for "Emo_support_11kBalanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sahithya20/example | 2023-09-12T09:16:17.000Z | [
"region:us"
] | sahithya20 | null | null | null | 0 | 17 | Entry not found |
open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base | 2023-09-13T01:26:29.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 17 | ---
pretty_name: Evaluation run of TigerResearch/tigerbot-70b-base
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TigerResearch/tigerbot-70b-base](https://huggingface.co/TigerResearch/tigerbot-70b-base)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-09-13T01:25:14.196261](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base/blob/main/results_2023-09-13T01-25-14.196261.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.653570860469327,\n\
\ \"acc_stderr\": 0.03227398917123444,\n \"acc_norm\": 0.6574888469922233,\n\
\ \"acc_norm_stderr\": 0.03225188000122814,\n \"mc1\": 0.3598531211750306,\n\
\ \"mc1_stderr\": 0.016801860466677164,\n \"mc2\": 0.5275754799335507,\n\
\ \"mc2_stderr\": 0.014974040188905885\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5955631399317406,\n \"acc_stderr\": 0.014342036483436177,\n\
\ \"acc_norm\": 0.6245733788395904,\n \"acc_norm_stderr\": 0.014150631435111728\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6339374626568413,\n\
\ \"acc_stderr\": 0.00480742334322458,\n \"acc_norm\": 0.8360884285998805,\n\
\ \"acc_norm_stderr\": 0.00369438736117764\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909281,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909281\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7368421052631579,\n \"acc_stderr\": 0.03583496176361073,\n\
\ \"acc_norm\": 0.7368421052631579,\n \"acc_norm_stderr\": 0.03583496176361073\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n\
\ \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7916666666666666,\n\
\ \"acc_stderr\": 0.03396116205845334,\n \"acc_norm\": 0.7916666666666666,\n\
\ \"acc_norm_stderr\": 0.03396116205845334\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\"\
: 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5895953757225434,\n\
\ \"acc_stderr\": 0.03750757044895537,\n \"acc_norm\": 0.5895953757225434,\n\
\ \"acc_norm_stderr\": 0.03750757044895537\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.71,\n \"acc_stderr\": 0.04560480215720685,\n \"acc_norm\": 0.71,\n\
\ \"acc_norm_stderr\": 0.04560480215720685\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6042553191489362,\n \"acc_stderr\": 0.03196758697835363,\n\
\ \"acc_norm\": 0.6042553191489362,\n \"acc_norm_stderr\": 0.03196758697835363\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n\
\ \"acc_stderr\": 0.045144961328736334,\n \"acc_norm\": 0.35964912280701755,\n\
\ \"acc_norm_stderr\": 0.045144961328736334\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.43915343915343913,\n \"acc_stderr\": 0.025559920550531003,\n \"\
acc_norm\": 0.43915343915343913,\n \"acc_norm_stderr\": 0.025559920550531003\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8161290322580645,\n\
\ \"acc_stderr\": 0.022037217340267826,\n \"acc_norm\": 0.8161290322580645,\n\
\ \"acc_norm_stderr\": 0.022037217340267826\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.03510766597959215,\n\
\ \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.03510766597959215\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\"\
: 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8363636363636363,\n \"acc_stderr\": 0.028887872395487953,\n\
\ \"acc_norm\": 0.8363636363636363,\n \"acc_norm_stderr\": 0.028887872395487953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8535353535353535,\n \"acc_stderr\": 0.02519092111460393,\n \"\
acc_norm\": 0.8535353535353535,\n \"acc_norm_stderr\": 0.02519092111460393\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328971,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328971\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6435897435897436,\n \"acc_stderr\": 0.0242831405294673,\n \
\ \"acc_norm\": 0.6435897435897436,\n \"acc_norm_stderr\": 0.0242831405294673\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3148148148148148,\n \"acc_stderr\": 0.028317533496066475,\n \
\ \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.028317533496066475\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634335,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634335\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.44370860927152317,\n \"acc_stderr\": 0.04056527902281731,\n \"\
acc_norm\": 0.44370860927152317,\n \"acc_norm_stderr\": 0.04056527902281731\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8697247706422019,\n \"acc_stderr\": 0.014431862852473264,\n \"\
acc_norm\": 0.8697247706422019,\n \"acc_norm_stderr\": 0.014431862852473264\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6018518518518519,\n \"acc_stderr\": 0.033384734032074016,\n \"\
acc_norm\": 0.6018518518518519,\n \"acc_norm_stderr\": 0.033384734032074016\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8676470588235294,\n \"acc_stderr\": 0.023784297520918853,\n \"\
acc_norm\": 0.8676470588235294,\n \"acc_norm_stderr\": 0.023784297520918853\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.869198312236287,\n \"acc_stderr\": 0.02194876605947076,\n \
\ \"acc_norm\": 0.869198312236287,\n \"acc_norm_stderr\": 0.02194876605947076\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7085201793721974,\n\
\ \"acc_stderr\": 0.03050028317654585,\n \"acc_norm\": 0.7085201793721974,\n\
\ \"acc_norm_stderr\": 0.03050028317654585\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.03498149385462473,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.03498149385462473\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.036401182719909456,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.036401182719909456\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n\
\ \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.0398913985953177,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.0398913985953177\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.02158649400128137,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.02158649400128137\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \
\ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8199233716475096,\n\
\ \"acc_stderr\": 0.01374079725857982,\n \"acc_norm\": 0.8199233716475096,\n\
\ \"acc_norm_stderr\": 0.01374079725857982\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.02344582627654554,\n\
\ \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.02344582627654554\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.34972067039106147,\n\
\ \"acc_stderr\": 0.015949308790233645,\n \"acc_norm\": 0.34972067039106147,\n\
\ \"acc_norm_stderr\": 0.015949308790233645\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6993464052287581,\n \"acc_stderr\": 0.02625605383571896,\n\
\ \"acc_norm\": 0.6993464052287581,\n \"acc_norm_stderr\": 0.02625605383571896\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7556270096463023,\n\
\ \"acc_stderr\": 0.024406162094668886,\n \"acc_norm\": 0.7556270096463023,\n\
\ \"acc_norm_stderr\": 0.024406162094668886\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7530864197530864,\n \"acc_stderr\": 0.023993501709042128,\n\
\ \"acc_norm\": 0.7530864197530864,\n \"acc_norm_stderr\": 0.023993501709042128\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5319148936170213,\n \"acc_stderr\": 0.029766675075873873,\n \
\ \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.029766675075873873\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.012770236105969923,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.012770236105969923\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.028888193103988633,\n\
\ \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.028888193103988633\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6993464052287581,\n \"acc_stderr\": 0.018550634502952964,\n \
\ \"acc_norm\": 0.6993464052287581,\n \"acc_norm_stderr\": 0.018550634502952964\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.763265306122449,\n \"acc_stderr\": 0.027212835884073125,\n\
\ \"acc_norm\": 0.763265306122449,\n \"acc_norm_stderr\": 0.027212835884073125\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8805970149253731,\n\
\ \"acc_stderr\": 0.02292879327721974,\n \"acc_norm\": 0.8805970149253731,\n\
\ \"acc_norm_stderr\": 0.02292879327721974\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \
\ \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3598531211750306,\n\
\ \"mc1_stderr\": 0.016801860466677164,\n \"mc2\": 0.5275754799335507,\n\
\ \"mc2_stderr\": 0.014974040188905885\n }\n}\n```"
repo_url: https://huggingface.co/TigerResearch/tigerbot-70b-base
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|arc:challenge|25_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hellaswag|10_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T01-25-14.196261.parquet'
- config_name: results
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- results_2023-09-13T01-25-14.196261.parquet
- split: latest
path:
- results_2023-09-13T01-25-14.196261.parquet
---
# Dataset Card for Evaluation run of TigerResearch/tigerbot-70b-base
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TigerResearch/tigerbot-70b-base
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TigerResearch/tigerbot-70b-base](https://huggingface.co/TigerResearch/tigerbot-70b-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-13T01:25:14.196261](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base/blob/main/results_2023-09-13T01-25-14.196261.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.653570860469327,
"acc_stderr": 0.03227398917123444,
"acc_norm": 0.6574888469922233,
"acc_norm_stderr": 0.03225188000122814,
"mc1": 0.3598531211750306,
"mc1_stderr": 0.016801860466677164,
"mc2": 0.5275754799335507,
"mc2_stderr": 0.014974040188905885
},
"harness|arc:challenge|25": {
"acc": 0.5955631399317406,
"acc_stderr": 0.014342036483436177,
"acc_norm": 0.6245733788395904,
"acc_norm_stderr": 0.014150631435111728
},
"harness|hellaswag|10": {
"acc": 0.6339374626568413,
"acc_stderr": 0.00480742334322458,
"acc_norm": 0.8360884285998805,
"acc_norm_stderr": 0.00369438736117764
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909281,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909281
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7368421052631579,
"acc_stderr": 0.03583496176361073,
"acc_norm": 0.7368421052631579,
"acc_norm_stderr": 0.03583496176361073
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7916666666666666,
"acc_stderr": 0.03396116205845334,
"acc_norm": 0.7916666666666666,
"acc_norm_stderr": 0.03396116205845334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5895953757225434,
"acc_stderr": 0.03750757044895537,
"acc_norm": 0.5895953757225434,
"acc_norm_stderr": 0.03750757044895537
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.04560480215720685,
"acc_norm": 0.71,
"acc_norm_stderr": 0.04560480215720685
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6042553191489362,
"acc_stderr": 0.03196758697835363,
"acc_norm": 0.6042553191489362,
"acc_norm_stderr": 0.03196758697835363
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.35964912280701755,
"acc_stderr": 0.045144961328736334,
"acc_norm": 0.35964912280701755,
"acc_norm_stderr": 0.045144961328736334
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370333,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370333
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43915343915343913,
"acc_stderr": 0.025559920550531003,
"acc_norm": 0.43915343915343913,
"acc_norm_stderr": 0.025559920550531003
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8161290322580645,
"acc_stderr": 0.022037217340267826,
"acc_norm": 0.8161290322580645,
"acc_norm_stderr": 0.022037217340267826
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.46798029556650245,
"acc_stderr": 0.03510766597959215,
"acc_norm": 0.46798029556650245,
"acc_norm_stderr": 0.03510766597959215
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8363636363636363,
"acc_stderr": 0.028887872395487953,
"acc_norm": 0.8363636363636363,
"acc_norm_stderr": 0.028887872395487953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8535353535353535,
"acc_stderr": 0.02519092111460393,
"acc_norm": 0.8535353535353535,
"acc_norm_stderr": 0.02519092111460393
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328971,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328971
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6435897435897436,
"acc_stderr": 0.0242831405294673,
"acc_norm": 0.6435897435897436,
"acc_norm_stderr": 0.0242831405294673
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.028317533496066475,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.028317533496066475
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.029344572500634335,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.029344572500634335
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.44370860927152317,
"acc_stderr": 0.04056527902281731,
"acc_norm": 0.44370860927152317,
"acc_norm_stderr": 0.04056527902281731
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8697247706422019,
"acc_stderr": 0.014431862852473264,
"acc_norm": 0.8697247706422019,
"acc_norm_stderr": 0.014431862852473264
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6018518518518519,
"acc_stderr": 0.033384734032074016,
"acc_norm": 0.6018518518518519,
"acc_norm_stderr": 0.033384734032074016
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8676470588235294,
"acc_stderr": 0.023784297520918853,
"acc_norm": 0.8676470588235294,
"acc_norm_stderr": 0.023784297520918853
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.869198312236287,
"acc_stderr": 0.02194876605947076,
"acc_norm": 0.869198312236287,
"acc_norm_stderr": 0.02194876605947076
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7085201793721974,
"acc_stderr": 0.03050028317654585,
"acc_norm": 0.7085201793721974,
"acc_norm_stderr": 0.03050028317654585
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.03498149385462473,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.03498149385462473
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.036401182719909456,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.036401182719909456
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.0398913985953177,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.0398913985953177
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.02158649400128137,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.02158649400128137
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252609,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252609
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8199233716475096,
"acc_stderr": 0.01374079725857982,
"acc_norm": 0.8199233716475096,
"acc_norm_stderr": 0.01374079725857982
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.02344582627654554,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.02344582627654554
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.34972067039106147,
"acc_stderr": 0.015949308790233645,
"acc_norm": 0.34972067039106147,
"acc_norm_stderr": 0.015949308790233645
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6993464052287581,
"acc_stderr": 0.02625605383571896,
"acc_norm": 0.6993464052287581,
"acc_norm_stderr": 0.02625605383571896
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7556270096463023,
"acc_stderr": 0.024406162094668886,
"acc_norm": 0.7556270096463023,
"acc_norm_stderr": 0.024406162094668886
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.023993501709042128,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.023993501709042128
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.029766675075873873,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.029766675075873873
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5,
"acc_stderr": 0.012770236105969923,
"acc_norm": 0.5,
"acc_norm_stderr": 0.012770236105969923
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6544117647058824,
"acc_stderr": 0.028888193103988633,
"acc_norm": 0.6544117647058824,
"acc_norm_stderr": 0.028888193103988633
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6993464052287581,
"acc_stderr": 0.018550634502952964,
"acc_norm": 0.6993464052287581,
"acc_norm_stderr": 0.018550634502952964
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.763265306122449,
"acc_stderr": 0.027212835884073125,
"acc_norm": 0.763265306122449,
"acc_norm_stderr": 0.027212835884073125
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8805970149253731,
"acc_stderr": 0.02292879327721974,
"acc_norm": 0.8805970149253731,
"acc_norm_stderr": 0.02292879327721974
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466125,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466125
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3598531211750306,
"mc1_stderr": 0.016801860466677164,
"mc2": 0.5275754799335507,
"mc2_stderr": 0.014974040188905885
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
universeTBD/arxiv-bit-flip-cs.LG | 2023-09-24T00:11:18.000Z | [
"region:us"
] | universeTBD | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: bit
dtype: string
- name: flip
dtype: string
- name: title
dtype: string
- name: categories
dtype: string
- name: abstract
dtype: string
- name: authors
dtype: string
- name: doi
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 229044314
num_examples: 100039
download_size: 127335112
dataset_size: 229044314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv-bit-flip-cs.LG"
This dataset contains "Bit-Flips," structured representations extracted from the abstracts of ArXiv papers, specifically in the category of `cs.LG` (Machine Learning). These Bit-Flips aim to encapsulate the essence of the research by identifying the conventional belief or 'status quo' the abstract challenges (Bit) and the counterargument or innovative approach that flips the Bit (Flip).
## Bit-Flip Concept
A Bit-Flip serves as a two-part schema:
* _Bit_: It identifies the conventional belief or assumption that the research implicitly or explicitly challenges. It is composed of three sentences that are logically connected.
* _Flip_: It formulates the counterargument or innovative approach that flips the conventional belief or Bit. It also consists of three logically connected sentences.
## Data Collection
The dataset focuses on the ArXiv category of `cs.LG` (Machine Learning). The dataset was created to understand the paradigm shifts or challenges to conventional wisdom that are presented in new research, encapsulated through the Bit-Flip schema.
## Methodology
The data was processed using a Python script that performs the following steps:
1. The script generates a custom prompt based on each abstract, using a predefined template that explains the Bit-Flip concept.
2. An Azure model is used to generate a response to the custom prompt.
3. The response is parsed to extract a JSON-like structure containing the Bit and the Flip.
4. Each Bit and Flip is saved along with the title of the paper.
5. The script uses multithreading to speed up the data processing and can handle a batch of abstracts in each run. The processed data is saved in a CSV file. |
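Step 3 above (parsing a JSON-like structure out of a free-form model response) can be sketched as follows. Note this is an illustrative reconstruction, not the card authors' actual script: the function name, the response format, and the exact parsing strategy are assumptions.

```python
import json
import re

def extract_bit_flip(response: str) -> dict:
    """Extract the first JSON object containing 'bit' and 'flip' keys
    from a free-form model response. Returns {} if none is found."""
    # Find the first {...} block in the response (non-greedy, across newlines).
    match = re.search(r"\{.*?\}", response, flags=re.DOTALL)
    if not match:
        return {}
    try:
        parsed = json.loads(match.group(0))
    except json.JSONDecodeError:
        return {}
    # Keep only well-formed responses that carry both halves of the schema.
    if "bit" in parsed and "flip" in parsed:
        return parsed
    return {}

response = (
    "Here is the structured output:\n"
    '{"bit": "Models must be large to generalize.", '
    '"flip": "Careful data curation lets small models match them."}'
)
result = extract_bit_flip(response)
print(result["bit"])  # → Models must be large to generalize.
```

In practice a parser like this has to tolerate malformed outputs, which is why the sketch returns an empty dict instead of raising on unparseable responses.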
fondant-ai/fondant-cc-25m | 2023-09-28T08:00:56.000Z | [
"task_categories:text-to-image",
"size_categories:10M<n<100M",
"license:cc",
"art",
"region:us"
] | fondant-ai | null | null | null | 20 | 17 | ---
license: cc
task_categories:
- text-to-image
tags:
- art
size_categories:
- 10M<n<100M
---
# Dataset Card for Fondant Creative Commons 25 million (fondant-cc-25m)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Changelog](#changelog)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [How to use it](#how-to-use-it)
- [How to contribute](#how-to-contribute)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Data Collection and Preprocessing](#data-collection-and-preprocessing)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Opting out](#opting-out)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Disclaimer](#disclaimer)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Contact](#contact)
## Dataset Description
- **Homepage:** https://www.fondant.ai/
- **Repository:** https://github.com/ml6team/fondant
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** info@fondant.ai
### Changelog
|Release|Description|
|-|-|
|v0.1| Release of the Fondant-cc-25m dataset|
### Dataset Summary
Fondant-cc-25m contains 25 million image URLs with their respective [Creative Commons](https://creativecommons.org/)
license information collected from the [Common Crawl web corpus](https://commoncrawl.org/).
The dataset was created using [Fondant](https://fondant.ai), an open source framework that aims to simplify and speed up
large-scale data processing by making self-contained pipeline components reusable across pipelines, infrastructures and shareable within the community.
### Supported Tasks and Leaderboards
This dataset can be used for training or fine-tuning image generation or computer vision models.
### How to use it
To execute the pipeline locally, you must have [docker compose](https://docs.docker.com/compose/),
[Python](https://python.org) >=3.8 and [Git](https://git-scm.com/) installed on your system.
To ensure a successful example run, please allocate at least 8GB of RAM to your Docker environment.
**Note:** For Apple M1/M2 chip users:
- Make sure that Docker uses the linux/amd64 platform and not arm64. In Docker Dashboard, go to Settings > Features in development and make sure to uncheck `Use containerd for pulling and storing images`.
- For improved execution speed, check the box that says `Use Rosetta for x86/amd64 emulation on Apple Silicon`.
We have prepared a sample Fondant pipeline for downloading the dataset.
1) Install Fondant by running:
```bash
pip install fondant
```
2) Clone the [Fondant GitHub repository](https://github.com/ml6team/fondant)
```bash
git clone https://github.com/ml6team/fondant.git
```
3) Make sure that Docker Compose is running, navigate to `fondant/examples/pipelines/filter-cc-25m`, and initiate the pipeline by executing:
```bash
fondant run pipeline --local
```
**Note:** For local testing purposes, the pipeline will only download the first 100,000 images.
If you want to download the full dataset, you will need to modify the component arguments in the `pipeline.py` file,
specifically the following part:
```python
load_from_hf_hub = ComponentOp(
component_dir="components/load_from_hf_hub",
arguments={
"dataset_name": "fondant-ai/fondant-cc-25m",
"column_name_mapping": load_component_column_mapping,
"n_rows_to_load": <HERE INSERT THE NUMBER OF IMAGES YOU WANT TO DOWNLOAD>
},
)
```
4) To visually inspect the results quickly, you can use:
```bash
fondant explore --base_path ./data
```
5) You can also choose to download the images to your local machine if you prefer. We have provided an [example script](https://huggingface.co/datasets/fondant-ai/fondant-cc-25m/blob/main/extract_images.py)
that enables this:
To run the script, you can simply execute the following:
```bash
python extract_images.py --parquet_file <Path to the Parquet file or folder containing the images> --save_folder <The folder where to save the images to>
```
### How to contribute
If you want to contribute to the dataset, the best way is to help us develop pipeline components for further processing.
Creating custom pipelines for specific purposes requires different building blocks.
Fondant pipelines can mix reusable components and custom components.

Components we are currently looking to add are the following ([GitHub issues](https://github.com/ml6team/fondant/issues?q=is%3Aissue+is%3Aopen+label%3A%22Component+Contribution%22)):
- 👯 Image-based deduplication
- 🖥️✎ Automatic captioning
- 🎨 Visual quality / aesthetic quality estimation
- 🔏 Watermark detection
- 🔞 Not safe for work (NSFW) content detection
- 📇 CLIP embedding generation
- 😐 Face detection
- 🙋🏻♂️ Personal Identifiable Information (PII) detection
- 📝 Text detection
- 🤖 AI generated image detection
- 👬 Image-text CLIP similarity
- 👨🎨 Any components that you propose to develop
We are also looking for core framework contributors and users who are willing to give feedback on usability and suggest potential improvements.
## Dataset Structure
### Data Instances
Each data instance corresponds to one image. The URL of the image is in the `image_url` feature, and other features (`alt_text`, `webpage_url`, etc) provide some
metadata. Note that images have been deduplicated only based on their URLs.
### Data Fields
- `image_url` (string): image url to download the image
- `alt_text` (string): alternative text of the image
- `webpage_url` (string): webpage source of the image
- `license_type` (string): creative commons license type of the image
- `license_location` (string): location of the license on the webpage
- `surt_url` (string): SURT-formatted (sort-friendly) image URL with the top-level domain as the prefix
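The SURT (Sort-friendly URI Reordering Transform) form reverses the host labels so that URLs group by top-level domain when sorted. A minimal sketch of the transform (real implementations such as the Internet Archive's `surt` library also handle ports, query strings, and scheme normalization):

```python
from urllib.parse import urlparse

def to_surt(url: str) -> str:
    """Convert a URL to a simplified SURT form: host labels reversed
    and comma-joined, followed by ')/' and the path."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    # "images.example.com" -> "com,example,images"
    reversed_host = ",".join(reversed(host.split(".")))
    return f"{reversed_host}){parsed.path or '/'}"

print(to_surt("https://images.example.com/photos/cat.jpg"))
# → com,example,images)/photos/cat.jpg
```

Sorting by this key puts all images from the same domain next to each other, which is useful for per-domain filtering or rate-limited downloading.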
### Data Splits
We do not provide any canonical splits for fondant-cc-25m.
## Dataset Creation
### Curation Rationale
Current AI image generation models such as Stable Diffusion and Dall-E are trained on hundreds of millions of images from the public Internet
including copyrighted work. This creates legal risks and uncertainties for users of these images and is unfair towards copyright holders who
may not want their proprietary work reproduced without consent.
By releasing a Creative Commons image dataset, we hope to mitigate legal risks and empower ethical AI development that respects copyright.
This dataset is the first step towards our goal of a 500M Creative Commons image dataset.
### Source Data
fondant-cc-25m is built from CommonCrawl dumps. These dumps are constructed from crawling publicly available web pages.
### Data Collection and Preprocessing
Permissive licenses have minimal restrictions on how the image can be copied, modified, and redistributed.
The full list of licenses can be found [here](https://creativecommons.org/about/cclicenses/).
We examined HTML tags of the webpages for the presence of Creative Commons license URLs. A webpage was marked permissive only when a license URL was found in
its footer, aside or sidebar. This was the case only in around 0.164% of a 100k random sample from Common Crawl. This suggests that image generation models
trained on a random sample from the public internet may be trained on up to 99.836% copyrighted images.
Subsequently, all the image URLs present on the web page were collected together with the license information. A manual check of a random
sample of 1032 images showed that 96.32% were attributed the correct license while 3.68% were not.
False positives could be due to parsing errors but also incorrect attributions: images indicated by the publisher to be CC which are not.
More information on our approach can be found in [this blogpost](https://blog.ml6.eu/ai-image-generation-without-copyright-infringement-a9901b64541c).
### Personal and Sensitive Information
The released dataset may contain sensitive information such as names, emails and addresses that have previously been published to the Internet.
In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting
and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes,
including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to info@fondant.ai.
The PII filtering pipeline for this dataset is still a work in progress. Researchers that wish to contribute to the anonymization pipeline of the project can join
[here](https://github.com/ml6team/fondant/tree/main#-contributing).
### Opting out
Fondant-cc-25m is based on CommonCrawl. Their crawler honors opt-out requests in the robots.txt, see the
[CC FAQ](https://commoncrawl.org/big-picture/frequently-asked-questions/) for details.
We are giving the public the ability to have their image removed from the dataset upon request. The process for submitting and enacting removal requests will keep
evolving throughout the project as we receive feedback and build up more data governance tools.
If you'd like to have your data removed from the dataset, [contact us](mailto:info@fondant.ai).
## Considerations for Using the Data
### Disclaimer
Fondant is making significant efforts to respect the intellectual property rights of third parties by publishing a dataset of
Creative Commons licensed images. Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness
of the content, (ii) an alleged infringement of intellectual property rights or (iii) any other alleged claim, action, injunction or suit
resulting from the publication or use of the dataset.
### Discussion of Biases
As toxic or biased data is prevalent on the internet, it is possible that our dataset contains such content.
## Additional Information
### Dataset Curators
1. Sharon Grundmann, ML6, sharon.grundmann@ml6.eu
2. Matthias Richter, ML6, matthias.richter@ml6.eu
3. Robbe Sneyders, ML6, robbe.sneyders@ml6.eu
### Licensing Information
Fondant-cc-25m is a collection of images with various Creative Commons and other public licenses. Any use of all or part of the images gathered in Fondant-cc-25m
must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of Creative Commons license types included in the dataset can be found [here](https://creativecommons.org/about/cclicenses/).
### Contact
- Email: [info@fondant.ai](mailto:info@fondant.ai)
- Discord: [https://discord.gg/HnTdWhydGp](https://discord.gg/HnTdWhydGp) |
TristanPermentier/some_chives_ones | 2023-09-17T11:27:02.000Z | [
"region:us"
] | TristanPermentier | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 21662024.0
num_examples: 29
download_size: 21484795
dataset_size: 21662024.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "some_chives_ones"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pengold/xlsum-vietnamese | 2023-09-17T17:41:23.000Z | [
"region:us"
] | pengold | null | null | null | 0 | 17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: text
dtype: string
- name: prefix_text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 528740548
num_examples: 32111
- name: test
num_bytes: 40990709
num_examples: 4013
- name: validation
num_bytes: 40943030
num_examples: 4013
download_size: 304960271
dataset_size: 610674287
---
# Dataset Card for "xlsum-vietnamese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ahmed000000000/cybersec | 2023-09-17T21:16:25.000Z | [
"region:us"
] | ahmed000000000 | null | null | null | 0 | 17 | Entry not found |
ExecrableChromosphere/dataset | 2023-09-18T11:23:57.000Z | [
"region:us"
] | ExecrableChromosphere | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 38873
num_examples: 228
download_size: 20752
dataset_size: 38873
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cgoosen/prompt_injection_password_or_secret | 2023-09-20T12:12:34.000Z | [
"license:afl-3.0",
"region:us"
] | cgoosen | null | null | null | 0 | 17 | ---
license: afl-3.0
---
|
kewu93/three_styles | 2023-09-19T08:50:51.000Z | [
"license:cc",
"region:us"
] | kewu93 | null | null | null | 0 | 17 | ---
license: cc
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 233434033.875
num_examples: 9897
- name: val
num_bytes: 98455383.139
num_examples: 4243
download_size: 339129751
dataset_size: 331889417.014
---
|
ricardosantoss/mimic | 2023-09-19T18:02:19.000Z | [
"region:us"
] | ricardosantoss | null | null | null | 0 | 17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: TEXT
dtype: string
- name: ICD9_CODE
sequence: string
splits:
- name: train
num_bytes: 350160331
num_examples: 39354
- name: test
num_bytes: 44827959
num_examples: 5000
- name: validation
num_bytes: 44381049
num_examples: 5000
download_size: 245192456
dataset_size: 439369339
---
# Dataset Card for "mimic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
goendalf666/sales-conversations | 2023-10-04T20:39:04.000Z | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"sales",
"arxiv:2306.11644",
"region:us"
] | goendalf666 | null | null | null | 2 | 17 | ---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- conversational
dataset_info:
features:
- name: '0'
dtype: string
- name: '1'
dtype: string
- name: '2'
dtype: string
- name: '3'
dtype: string
- name: '4'
dtype: string
- name: '5'
dtype: string
- name: '6'
dtype: string
- name: '7'
dtype: string
- name: '8'
dtype: string
- name: '9'
dtype: string
- name: '10'
dtype: string
- name: '11'
dtype: string
- name: '12'
dtype: string
- name: '13'
dtype: string
- name: '14'
dtype: string
- name: '15'
dtype: string
- name: '16'
dtype: string
- name: '17'
dtype: string
- name: '18'
dtype: string
- name: '19'
dtype: string
splits:
- name: train
num_bytes: 6821725
num_examples: 3412
download_size: 2644154
dataset_size: 6821725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- sales
---
# Dataset Card for "sales-conversations"
This dataset was created for the purpose of training a sales agent chatbot that can convince people.
The initial idea came from the paper *Textbooks Are All You Need*: https://arxiv.org/abs/2306.11644
gpt-3.5-turbo was used for the generation.
# Structure
Each conversation alternates between a customer and a salesman: customer, salesman, customer, salesman, and so on.
The customer always starts the conversation.
Who ends the conversation is not defined.
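Given the schema above (string columns named `'0'` through `'19'`, alternating customer/salesman turns starting with the customer), a row can be reassembled into labeled turns. The sketch below is illustrative: the sample row is made up, and treating `None`/empty cells as the end of a conversation is an assumption, not something stated by the dataset card.

```python
# Sketch: reassemble one dataset row into (speaker, utterance) pairs.
# Columns '0'..'19' follow the dataset_info schema above; even indices
# are customer turns, odd indices are salesman turns (assumption: a
# None or empty cell marks the end of the conversation).

def row_to_turns(row, max_turns=20):
    turns = []
    for i in range(max_turns):
        value = row.get(str(i))
        if value is None or value == "":
            break
        speaker = "Customer" if i % 2 == 0 else "Salesman"
        turns.append((speaker, value))
    return turns

example_row = {
    "0": "Hi, I'm looking for a CRM for a small team.",
    "1": "Great! How many seats do you need?",
    "2": "About five.",
    "3": None,
}
print(row_to_turns(example_row))
```

The same helper can be mapped over rows loaded with `datasets.load_dataset` to produce training pairs.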
# Generation
Note that a textbook dataset is mandatory for this conversation generation. These examples rely on the following textbook dataset:
https://huggingface.co/datasets/goendalf666/sales-textbook_for_convincing_and_selling
The data generation code can be found here: https://github.com/tom813/salesGPT_foundation/blob/main/data_generation/textbook_and_conversation_gen.py
The following prompt was used to create a conversation
```
import random

def create_random_prompt(chapter, roles=["Customer", "Salesman"], range_vals=(3, 7), industries=None):
if industries is None:
industries = ["tech", "health", "finance"] # default industries; replace with your default list if different
x = random.randint(*range_vals)
y = 0
for i in reversed(range(3, 9)): # Generalized loop for range of values
if i * x < 27:
y = i
break
conversation_structure = ""
for i in range(1, x+1):
conversation_structure += f"""
{roles[0]}: #{i}. sentence of {roles[0].lower()}
{roles[1]}: #{i}. sentence of {roles[1].lower()}"""
prompt = f"""Here is a chapter from a textbook about convincing people.
The purpose of this data is to use it to fine tune a llm.
Generate conversation examples that are based on the chapter that is provided and would help an ai to learn the topic by examples.
Focus only on the topic that is given in the chapter when generating the examples.
Let the example be in the {random.choice(industries)} industry.
Follow this structure and put each conversation in a list of objects in json format. Only return the json nothing more:
{conversation_structure}
Generate {y} lists of those conversations
Chapter:{chapter}"""
return prompt
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AmanJolly/test_LLAMA_re | 2023-09-22T11:54:33.000Z | [
"region:us"
] | AmanJolly | null | null | null | 0 | 17 | Entry not found |
lokesh2002/construction_sample_dataset2 | 2023-09-22T21:05:59.000Z | [
"region:us"
] | lokesh2002 | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4214015.0
num_examples: 10
download_size: 4162284
dataset_size: 4214015.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "construction_sample_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aditijha/instruct_v1_10k | 2023-09-22T21:16:57.000Z | [
"region:us"
] | aditijha | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 7376479.683021413
num_examples: 10000
download_size: 3930326
dataset_size: 7376479.683021413
---
# Dataset Card for "instruct_v1_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rainbow/Andy_Lau | 2023-09-24T12:41:43.000Z | [
"region:us"
] | rainbow | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6985835.0
num_examples: 16
download_size: 6986820
dataset_size: 6985835.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Andy_Lau"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
berkgungor/llama2_golf | 2023-09-24T16:20:27.000Z | [
"license:llama2",
"region:us"
] | berkgungor | null | null | null | 0 | 17 | ---
license: llama2
---
|
abhinav-jha/abusive-calls | 2023-09-25T10:15:11.000Z | [
"region:us"
] | abhinav-jha | null | null | null | 0 | 17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 52348881.0
num_examples: 948
- name: test
num_bytes: 52348880.0
num_examples: 948
download_size: 95446094
dataset_size: 104697761.0
---
# Dataset Card for "abusive-calls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
paulopirozelli/pira | 2023-10-04T13:52:11.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:pt",
"language:en",
"license:cc-by-4.0",
"climate",
"arxiv:2309.10945",
"region:us"
] | paulopirozelli | null | null | null | 1 | 17 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: mcqa
data_files:
- split: train
path: mcqa/train-*
- split: validation
path: mcqa/validation-*
- split: test
path: mcqa/test-*
- config_name: paraphrases
data_files:
- split: train
path: paraphrases/train-*
- config_name: pira_version1
data_files:
- split: train
path: pira_version1/train-*
dataset_info:
- config_name: default
features:
- name: id_qa
dtype: string
- name: corpus
dtype: int64
- name: question_en_origin
dtype: string
- name: question_pt_origin
dtype: string
- name: question_en_paraphase
dtype: string
- name: question_pt_paraphase
dtype: string
- name: answer_en_origin
dtype: string
- name: answer_pt_origin
dtype: string
- name: answer_en_validate
dtype: string
- name: answer_pt_validate
dtype: string
- name: abstract
dtype: string
- name: eid_article_scopus
dtype: string
- name: question_generic
dtype: float64
- name: answer_in_text
dtype: float64
- name: answer_difficulty
dtype: float64
- name: question_meaningful
dtype: float64
- name: answer_equivalent
dtype: float64
- name: question_type
dtype: string
- name: abstract_translated_pt
dtype: string
- name: pt_question_translated_to_en
dtype: string
- name: at_labels
dtype: float64
splits:
- name: train
num_bytes: 8002269
num_examples: 1806
- name: validation
num_bytes: 994524
num_examples: 225
- name: test
num_bytes: 940555
num_examples: 227
download_size: 3976683
dataset_size: 9937348
- config_name: mcqa
features:
- name: id
dtype: string
- name: text
dtype: string
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: correct
dtype: string
- name: alternative
dtype: string
splits:
- name: train
num_bytes: 4327619
num_examples: 1798
- name: validation
num_bytes: 582526
num_examples: 225
- name: test
num_bytes: 551723
num_examples: 227
download_size: 2148096
dataset_size: 5461868
- config_name: paraphrases
features:
- name: question_AUT_EN_1
dtype: string
- name: question_AUT_EN_2
dtype: string
- name: answer_AUT_EN_1
dtype: string
- name: answer_AUT_EN_2
dtype: string
- name: question_AUT_PT_1
dtype: string
- name: question_AUT_PT_2
dtype: string
- name: answer_AUT_PT_1
dtype: string
- name: answer_AUT_PT_2
dtype: string
splits:
- name: train
num_bytes: 1175020
num_examples: 1806
download_size: 720519
dataset_size: 1175020
- config_name: pira_version1
features:
- name: id_qa
dtype: string
- name: corpus
dtype: int64
- name: question_en_origin
dtype: string
- name: question_pt_origin
dtype: string
- name: question_en_paraphase
dtype: string
- name: question_pt_paraphase
dtype: string
- name: answer_en_origin
dtype: string
- name: answer_pt_origin
dtype: string
- name: answer_en_validate
dtype: string
- name: answer_pt_validate
dtype: string
- name: eid_article_scopus
dtype: string
- name: text_excerpts_un_reports
dtype: string
- name: question_generic
dtype: bool
- name: answer_in_text
dtype: bool
- name: answer_difficulty
dtype: float64
- name: question_meaningful
dtype: float64
- name: answer_equivalent
dtype: float64
- name: question_type
dtype: string
splits:
- name: train
num_bytes: 3096316
num_examples: 2271
download_size: 1342133
dataset_size: 3096316
task_categories:
- question-answering
language:
- pt
- en
tags:
- climate
size_categories:
- 1K<n<10K
---
# Pirá: A Bilingual Portuguese-English Dataset for Question-Answering about the Ocean, the Brazilian coast, and climate change
Pirá is a crowdsourced reading comprehension dataset on the ocean, the Brazilian coast, and climate change.
QA sets are presented in both Portuguese and English, together with their corresponding textual context.
The dataset also contains human and automatic paraphrases for questions and answers, as well as a number of qualitative assessments.
The original paper was published at CIKM'21 and can be found [here](https://dl.acm.org/doi/pdf/10.1145/3459637.3482012).
As a subsequent project, we have produced a curated version of the dataset, which we refer to as Pirá 2.0.
In this step, we have also defined a number of benchmarks and reported the corresponding baselines.
This is the version that we make available at HuggingFace.
Pirá 2.0's preprint is available in [Arxiv](https://arxiv.org/abs/2309.10945).
Pirá is, to the best of our knowledge, the first QA dataset with supporting texts in Portuguese, and, perhaps more importantly,
the first bilingual QA dataset that includes Portuguese as one of its languages.
Pirá is also the first QA dataset in Portuguese with unanswerable questions so as to allow the study of answer triggering.
Finally, it is the first QA dataset that tackles scientific knowledge about the ocean, climate change, and marine biodiversity.
More information on the methodology, dataset versions, and benchmarks can be found on the project's [Github page](https://github.com/C4AI/Pira/).
You can also find there the Multiple-Choice version of Pirá.
# Dataset
The dataset is split into train, validation, and test sets.
| Split | Size | #QAs |
|---|---|---|
| Training | 80% | 1806 |
| Validation | 10% | 225 |
| Test | 10% | 227 |
| Full dataset | 100% | 2258 |
Below is an example of a question-answer set from Pirá:
```
{
'id_qa': 'B2142',
'corpus": 2,
'question_en_origin': 'What are the proportion of men and women employed in the fishery sector worlwide?',
'question_pt_origin': 'Qual é a proporção de homens e mulheres empregados no setor pesqueiro em todo o mundo?',
'question_en_paraphase': 'Which share of the fishery sector workers of the world are women?',
'question_pt_paraphase': 'Qual parcela dos trabalhadores do setor da pesca no mundo são mulheres?',
'answer_en_origin': '85 per cent men and 15 per cent women.',
'answer_pt_origin': '85 por cento homens e 15 por cento mulheres.',
'answer_en_validate': 'It is estimated that more than fifteen per cent of the fishing sector workers are women.',
'answer_pt_validate': 'Estima-se que mais de quinze por cento dos trabalhadores do setor da pesca são mulheres.',
'eid_article_scopus': '',
'text_excerpts_un_reports': 'Distribution of ocean benefits and disbenefits Developments in employment and income from fisheries and aquaculture The global harvest of marine capture fisheries has expanded rapidly since the early 1950s and is currently estimated to be about 80 million tons a year. That harvest is estimated to have a first (gross) value on the order of 113 billion dollars. Although it is difficult to produce accurate employment statistics, estimates using a fairly narrow definition of employment have put the figure of those employed in fisheries and aquaculture at 58.3 million people (4.4 per cent of the estimated total of economically active people), of which 84 per cent are in Asia and 10 per cent in Africa. Women are estimated to account for more than 15 per cent of people employed in the fishery sector. Other estimates, probably taking into account a wider definition of employment, suggest that capture fisheries provide direct and indirect employment for at least 120 million persons worldwide. Small-scale fisheries employ more than 90 per cent of the world’s capture fishermen and fish workers, about half of whom are women. When all dependants of those taking full- or part-time employment in the full value chain and support industries (boatbuilding, gear construction, etc.) of fisheries and aquaculture are included, one estimate concludes that between 660 and 820 million persons have some economic or livelihood dependence on fish capture and culture and the subsequent direct value chain. No sound information appears to be available on the levels of death and injury of those engaged in capture fishing or aquaculture, but capture fishing is commonly characterized as a dangerous occupation. Over time, a striking shift has occurred in the operation and location of capture fisheries. In the 1950s, capture fisheries were largely undertaken by developed fishing States. Since then, developing countries have increased their share. 
As a broad illustration, in the 1950s, the southern hemisphere accounted for no more than 8 per cent of landed values. By the last decade, the southern hemisphere’s share had risen to 20 per cent. In 2012, international trade represented 37 per cent of the total fish production in value, with a total export value of 129 billion dollars, of which 70 billion dollars (58 per cent) was exports by developing countries. Aquaculture is responsible for the bulk of the production of seaweeds. Worldwide, reports show that 24.9 million tons was produced in 2012, valued at about 6 billion dollars. In addition, about 1 million tons of wild seaweed were harvested. Few data were found on international trade in seaweeds, but their culture is concentrated in countries where consumption of seaweeds is high.',
'question_generic': false,
'answer_in_text': true,
'answer_difficulty': 1,
'question_meaningful': 5,
'answer_equivalent': 5,
'question_type': 'None of the above'
}
```
# Automatic Paraphrases
As we have only generated automatic paraphrases for questions and answers in the train set, they are stored in a separate dataset configuration.
To download the automatic paraphrases, just run:
```
from datasets import load_dataset

paraphrases = load_dataset("paulopirozelli/pira", "paraphrases")
```
# Multiple Choice Question Answering
We have also developed a multiple choice question answering version of Pirá 2.0.
To download the multiple-choice version, just run:
```
mcqa = load_dataset("paulopirozelli/pira", "mcqa")
```
Below is an example of a question-answer set from Pirá:
```
{
'id_qa': 'A1582',
'corpus': 1,
'question_en_origin': 'In the estuary, with marine influence, what was associated to deep areas with sandy sediment?',
'question_pt_origin': 'No estuário, com influência marinha, o que foi associado a áreas profundas com sedimento arenoso?',
'question_en_paraphase': 'What was discovered in estuary under deep areas with sand sediment and marine influence?',
'question_pt_paraphase': 'O que foi descoberto no estuário sob áreas profundas com sedimento arenoso e influência marítima?',
'answer_en_origin': 'The Laryngosigma lactea and Pyrgo oblonga foraminifera species.',
'answer_pt_origin': 'As espécies Laryngosigma lactea e Pyrgo oblonga de foraminíferos.',
'answer_en_validate': 'The species Laryngosigma lactea and Pyrgo oblonga.',
'answer_pt_validate': 'A espécie Laryngosigma lactea e Pyrgo oblonga.',
'eid_article_scopus': '2-s2.0-85092100205',
'text_excerpts_un_reports': None,
'question_generic': False,
'answer_in_text': True,
'answer_difficulty': 4.0,
'question_meaningful': 5.0,
'answer_equivalent': 4.0,
'question_type': 'Who'
}
```
# Pirá 1.0
You can also access the original Pirá dataset. Just run:
```
pira1 = load_dataset("paulopirozelli/pira", "pira_version1")
``` |
jeanai4/tacatalog | 2023-09-25T13:26:02.000Z | [
"region:us"
] | jeanai4 | null | null | null | 0 | 17 | Entry not found |
tyzhu/squad_title_train_10_eval_10 | 2023-09-26T03:54:02.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 238057
num_examples: 150
- name: validation
num_bytes: 60056
num_examples: 48
download_size: 72691
dataset_size: 298113
---
# Dataset Card for "squad_title_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_baseline_v3_train_10_eval_10 | 2023-09-26T08:02:24.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 65136
num_examples: 58
- name: validation
num_bytes: 61979
num_examples: 68
download_size: 0
dataset_size: 127115
---
# Dataset Card for "squad_baseline_v3_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_context_v4_train_30_eval_10 | 2023-09-26T09:49:11.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 299513
num_examples: 159
- name: validation
num_bytes: 80830
num_examples: 50
download_size: 113342
dataset_size: 380343
---
# Dataset Card for "squad_context_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrainingDataPro/customers-reviews-on-banks | 2023-09-26T11:08:32.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] | TrainingDataPro | null | null | null | 1 | 17 | ---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
language:
- en
tags:
- code
- finance
---
# Customers Reviews on Banks ⭐️
The Reviews on Banks Dataset is a comprehensive collection of the **20,000** most recent customer reviews on **48** US banks.
This dataset, containing diverse reviews on multiple banks, can be useful for *sentiment analysis, assessing geographical variations in customer satisfaction, and exploring customer preferences through textual data*.
Understanding customer sentiments and preferences helps **banks** improve their services and address any issues raised by customers in their reviews.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=customers-reviews-on-banks) to discuss your requirements, learn about the price and buy the dataset.
# Content
For each item, we extracted:
- **author**: name of the reviewer,
- **date**: date of the review,
- **location**: location of the reviewer,
- **bank**: bank that is reviewed,
- **star**: number of stars given to the bank by the reviewer,
- **text**: text of the review,
- **like**: number of likes on the review
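With records shaped like the fields above (`bank`, `star`, `like`, ...), a basic per-bank satisfaction summary is straightforward. The sketch below uses made-up sample rows, not actual dataset content:

```python
# Sketch: average star rating per bank from records shaped like the
# field list above ('bank', 'star'). The sample rows are illustrative.
from collections import defaultdict

reviews = [
    {"bank": "Bank A", "star": 5, "like": 3},
    {"bank": "Bank A", "star": 2, "like": 0},
    {"bank": "Bank B", "star": 4, "like": 1},
]

totals = defaultdict(lambda: [0, 0])  # bank -> [sum_of_stars, review_count]
for r in reviews:
    totals[r["bank"]][0] += r["star"]
    totals[r["bank"]][1] += 1

avg_star = {bank: s / n for bank, (s, n) in totals.items()}
print(avg_star)  # {'Bank A': 3.5, 'Bank B': 4.0}
```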
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=customers-reviews-on-banks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
MonKira/03F_IEMOCAP | 2023-09-27T09:48:29.000Z | [
"region:us"
] | MonKira | null | null | null | 0 | 17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: label
dtype: int64
- name: input_values
sequence: float32
splits:
- name: train
num_bytes: 1462828704
num_examples: 4995
- name: val
num_bytes: 147505532
num_examples: 536
download_size: 1220025832
dataset_size: 1610334236
---
# Dataset Card for "03F_IEMOCAP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MLNTeam-Unical/NFT-70M_image | 2023-10-02T16:51:33.000Z | [
"task_categories:time-series-forecasting",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:image-c... | MLNTeam-Unical | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: emb
sequence: float32
splits:
- name: train
num_bytes: 585722532
num_examples: 189923
download_size: 703210305
dataset_size: 585722532
size_categories:
- 10M<n<100M
license: cc-by-nc-4.0
task_categories:
- time-series-forecasting
- text-classification
- feature-extraction
- text-generation
- zero-shot-classification
- text2text-generation
- sentence-similarity
- image-classification
- image-to-text
- text-to-image
- text-retrieval
language:
- en
tags:
- Non-fungible Tokens
- Crypto
- Web3
- Art
- Multimodal Learning
pretty_name: NFT-70M_image
---
# Dataset Card for "NFT-70M_image"
## Dataset summary
The *NFT-70M_image* dataset is a companion for our released [**NFT-70M_transactions**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_transactions) dataset,
which is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from [OpenSea](https://opensea.io).
As we also reported in the "Data anonymization" section of the dataset card of *NFT-70M_transactions*,
the URLs of the NFT image data were replaced by identifiers pointing to numerical vectors (i.e., embeddings)
that encode the image contents, obtained via the [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) neural network model.
*Note about this dataset version: embedded-image data in this dataset only include jpg and png formats and correspond to the Collection headers (i.e., collection_image field in the NFT-70M_transactions dataset).
Upcoming versions will include all NFT embedded-image data.*
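Although the raw images are not released, the `emb` vectors (per the dataset_info above) still support content-based comparison between NFTs, e.g. via cosine similarity. The sketch below uses short toy vectors standing in for the actual ViT-derived embeddings:

```python
# Sketch: cosine similarity between two embedding vectors, as stored in
# the 'emb' field of this dataset. Toy 3-dim vectors are used here in
# place of the real ViT embeddings.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 1.0, 0.0]
print(round(cosine_similarity(v1, v2), 4))  # 0.5
```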
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: https://doi.org/10.1145/3539618.3591821
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: https://doi.org/10.48550/arXiv.2303.17031
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: https://doi.org/10.1145/3543507.3583520
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge the [OpenSea API](https://docs.opensea.io/reference/api-overview).
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.* |
ContextualAI/test | 2023-09-28T14:25:48.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: query
dtype: string
splits:
- name: validation
num_bytes: 1372000
num_examples: 1000
download_size: 9939
dataset_size: 1372000
---
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jouyang/clevr_1000 | 2023-09-30T06:58:33.000Z | [
"license:mit",
"region:us"
] | jouyang | null | null | null | 0 | 17 | ---
license: mit
---
|
yashnbx/l27b-E02-b05-0584-3 | 2023-09-30T10:22:11.000Z | [
"region:us"
] | yashnbx | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 4032267
num_examples: 584
download_size: 662022
dataset_size: 4032267
---
# Dataset Card for "l27b-E02-b05-0584-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hanifabdlh/quac-irza-ph-indo | 2023-10-02T02:05:17.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 6558699
num_examples: 9054
download_size: 3174472
dataset_size: 6558699
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-irza-ph-indo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hanifabdlh/quac-ichsan-alpacagpt4-indo | 2023-10-02T02:05:40.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 43177982
num_examples: 49969
download_size: 22784127
dataset_size: 43177982
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-ichsan-alpacagpt4-indo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Arbisoft/arbichat | 2023-10-02T17:12:46.000Z | [
"region:us"
] | Arbisoft | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: chunk-id
dtype: int64
- name: chunk
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1070206
num_examples: 1077
download_size: 452290
dataset_size: 1070206
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arbichat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ashiyakatuka11/en_es_combo_dataset | 2023-10-03T12:19:15.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | null | 0 | 17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Session_ID
dtype: float64
- name: 'Speaker '
dtype: string
- name: UserID
dtype: string
- name: prev_Utterance
dtype: string
- name: Utterance
dtype: string
- name: prevUtt_TAG
dtype: string
- name: TAG
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: 'Corpus Utterance #'
dtype: float64
- name: 'Session Utterance #'
dtype: string
- name: Time
dtype: string
- name: User
dtype: string
- name: Session ID
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1348026
num_examples: 7684
- name: test
num_bytes: 337648
num_examples: 1922
download_size: 595953
dataset_size: 1685674
---
# Dataset Card for "en_es_combo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sbarham/megawika-test | 2023-10-03T17:22:49.000Z | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:af",
"language:ar",
"language:az",
"language:bn",
"language:cs",
"language:de",
"language:en",
"language:e... | sbarham | MegaWika is a multi- and crosslingual text dataset containing 30 million
Wikipedia passages with their scraped and cleaned web citations. The
passages span 50 Wikipedias in 50 languages, and the articles in which
the passages were originally embedded are included for convenience. Where
a Wikipedia passage is in a non-English language, an automated English
translation is provided. Furthermore, nearly 130 million English
question/answer pairs were extracted from the passages, and FrameNet events
occurring in the passages are detected using the LOME FrameNet parser. | @article{barham2023megawika,
title={MegaWika: Millions of reports and their sources across 50 diverse languages},
author={Barham, Samuel and Weller, Orion and
Yuan, Michelle and Murray, Kenton and
Yarmohammadi, Mahsa and Jiang, Zhengping and
Vashishtha, Siddharth and Martin, Alexander and
Liu, Anqi and White, Aaron Steven and
Boyd-Graber, Jordan and Van Durme, Benjamin
},
journal={INSERT ARXIV PREPRINT ID HERE},
year={2023}
} | null | 0 | 17 | ---
license: cc-by-sa-4.0
task_categories:
- summarization
- question-answering
- text-generation
- text2text-generation
language:
- af
- ar
- az
- bn
- cs
- de
- en
- es
- et
- fa
- fi
- fr
- ga
- gl
- gu
- he
- hi
- hr
- id
- it
- ja
- ka
- kk
- km
- ko
- lt
- lv
- mk
- ml
- mn
- mr
- my
- ne
- nl
- pl
- ps
- pt
- ro
- ru
- si
- sl
- sv
- ta
- th
- tr
- uk
- ur
- vi
- xh
- zh
pretty_name: MegaWika
size_categories:
- 10M<n<100M
---
# Dataset Card for MegaWika
## Dataset Description
- **Homepage:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Repository:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Paper:** [Coming soon]
- **Leaderboard:** [Coming soon]
- **Point of Contact:** [Samuel Barham](samuel.barham@jhuapl.edu)
### Dataset Summary
MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span
50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a
non-English language, an automated English translation is provided. Furthermore, nearly 130 million English question/answer pairs were extracted from the
passages, and FrameNet events occurring in the passages are detected using the [LOME](https://aclanthology.org/2021.eacl-demos.19.pdf) FrameNet parser.
<!---
To get a feel for the dataset -- its structure, content, strengths and weaknesses -- you may visit the [dataset viewer](https://huggingface.co/spaces/hltcoe/megawika)
we have set up as a HuggingFace Space. It allows the curious visitor to explore a small set of examples spread across a number of the dataset's constituent languages.
-->
### Dataset Creation
The pipeline through which MegaWika was created is complex, and is described in more detail in the paper (linked above),
but the following diagram illustrates the basic approach.

### Supported Tasks and Leaderboards
MegaWika is meant to support research across a variety of tasks, including report generation, summarization, information retrieval, question answering, etc.
### Languages
MegaWika is divided by Wikipedia language. There are 50 languages, including English, each designated by their 2-character ISO language code:
- `af`: Afrikaans
- `ar`: Arabic
- `az`: Azeri (Azerbaijani)
- `bn`: Bengali
- `cs`: Czech
- `de`: German (Deutsch)
- `en`: English
- `es`: Spanish (Español)
- `et`: Estonian
- `fa`: Farsi (Persian)
- `fi`: Finnish
- `fr`: French
- `ga`: Irish (Gaelic)
- `gl`: Galician
- `gu`: Gujarati
- `he`: Hebrew
- `hi`: Hindi
- `hr`: Croatian
- `id`: Indonesian
- `it`: Italian
- `ja`: Japanese
- `ka`: Georgian (Kartvelian/Kartlian)
- `kk`: Kazakh
- `km`: Khmer
- `ko`: Korean
- `lt`: Lithuanian
- `lv`: Latvian
- `mk`: Macedonian (Makedonski)
- `ml`: Malayalam
- `mn`: Mongolian
- `mr`: Marathi
- `my`: Burmese (Myanmar language)
- `ne`: Nepali
- `nl`: Dutch (Nederlands)
- `pl`: Polish
- `ps`: Pashto
- `pt`: Portuguese
- `ro`: Romanian
- `ru`: Russian
- `si`: Sinhalese (Sri Lankan language)
- `sl`: Slovenian
- `sv`: Swedish (Svenska)
- `ta`: Tamil
- `th`: Thai
- `tr`: Turkish
- `uk`: Ukrainian
- `ur`: Urdu
- `vi`: Vietnamese
- `xh`: Xhosa
- `zh`: Chinese (Zhōng wén)
## Dataset Structure
The dataset is divided by language, and the data for each of the 50 languages is further chunked into discrete JSON lines files.
Each line of these files -- we'll call such a line an **instance** -- contains the data extracted from a single Wikipedia article.
### Data Instances
Each instance contains the text of the seed Wikipedia article, along with a list of **entries**. Each entry consists essentially of
an extracted Wikipedia passage, the URL and scraped text of the web source it cites, a list of question/answer pairs extracted from the passage,
and a FrameNet parse of the passage. Where the passage is from a non-English Wikipedia, a machine translation into English is also provided.
### Data Fields
The detailed structure of an instance is as follows:
```
{
"article_title": <string : title of original Wikipedia article>
"article_text": <string : text of Wikipedia article>
"entries": [
# Wiki Passage
"id": <string : passage ID>
"passage": {
"text": <string : text of passage in English (possibly via MT)>
"parse": <list of dict : FrameNet parse of English passage text>
"en_tokens": <dict : tokenization of passage in English>
"lang_tokens": <dict : tokenization of original non-English passage>
"en_lang_token_map": <dict : alignment mapping between English and original language token indices>
}
# MT
"original": <string : original language passage>
"original_sents": <list of string : sentencized original language passage>
"translation": <string : machine translation of passage>
"translation_sents": <list of string : sentencized machine translation of passage>
"translation_probs": <list of float : log prob of machine translation by sentence, where available>
"repetitious_translation": <string \in ("true", "false") : automated judgment on whether machine translation is pathologically repetitious>
"source_lang": <string : language ID, 2-character ISO code>
# Source
"source_url": <string : URL of the cited web source>
"source_text": <string : content extracted from the scrape of the source URL>
# Question/Answer Pairs
"qa_pairs": [
...
{
"question": <string : generated question>
"passage_id": <string : passage ID>
"en_answer": <string : English answer>
"lang_answer": <string : aligned original language answer>
"frames": [
...
{
"frame": <string : frame triggered by the question>
"argument": <string : detected frame arguments>
}
...
]
# NB: answer matches can be empty, in the case no matching span exists
"en_matches_in_source": <list of int : start and end index of the English language-answer token(s) in the source document>
"en_match_in_passage": <list of int : start and end index of the English language-answer token(s) in the English language translation of the passage>
"lang_matches_in_source": <list of int : start and end index of the original language-answer token(s) in the source document>
"lang_match_in_passage": <list of int : start and end index of the original language-answer token(s) in the original language passage>
"passage": <list of string : sentencized view of the passage>
"en_answer_tokens": <list of string>
"match_disambiguated_question": <string : disambiguated version of question obtained by matching pronouns with article title (noisy but often helpful)>
}
...
]
]
}
```
English language instances differ not in structure but in content;
1. Fields in the block labeled "MT" above are naturally null (that is, they are set to falsy values in Python -- specifically `None`)
2. Since the Wiki passage only exists in English, and has no corresponding non-English "original language" version, answer spans necessarily have only an English-language version (and no non-English "original-language" version). Therefore, fields in the `qa_pairs` block beginning with `lang_` are set to null/falsy values in Python (in this case, empty lists).
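Since each chunk file is JSON lines, an instance can be consumed with nothing but the standard library. A minimal sketch of walking one instance follows; the instance here is a hypothetical, heavily truncated example of the schema above, not real MegaWika data:

```python
import json

# A hypothetical, heavily truncated instance in the schema above.
line = json.dumps({
    "article_title": "Example",
    "article_text": "Example article text.",
    "entries": [{
        "id": "0",
        "passage": {"text": "An example passage."},
        "source_url": "https://example.com",
        "source_text": "Scraped source text.",
        "qa_pairs": [{"question": "What is shown?", "en_answer": "An example"}],
    }],
})

instance = json.loads(line)
for entry in instance["entries"]:
    passage = entry["passage"]["text"]
    for qa in entry["qa_pairs"]:
        print(qa["question"], "->", qa["en_answer"])  # prints: What is shown? -> An example
```

In practice you would iterate over the lines of a downloaded chunk file rather than a single in-memory string.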
### Data Splits
MegaWika is currently split only by language, as each task will imply its own approach to filtering, sampling, downselecting, and splitting into train/test splits.
<!---
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
-->
## Licensing and Takedown
MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles).
We do not own any of the scraped text nor do we claim copyright: text drawn from Wikipedia citations is meant for research use in algorithmic design and model training.
We release this dataset and all its contents under CC-BY-SA-4.0.
### Notice and Takedown Policy:
*NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact the authors.
*Take down*: We will comply with legitimate requests by removing the affected sources from the next release of the dataset.
## Additional Information
### Dataset Curators
Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE).
You can contact one of the MegaWika authors, including [Samuel Barham](mailto:samuel.barham@jhuapl.edu), [Orion Weller](mailto:oweller2@jhu.edu),
and [Ben van Durme](mailto:vandurme@jhu.edu) with questions.
### Licensing Information
Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
```
@misc{barham2023megawika,
title={MegaWika: Millions of reports and their sources across 50 diverse languages},
      author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme},
year={2023},
eprint={2307.07049},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
### Contributions
[More Information Needed]
--> |
shossain/govreport-qa-5-2048 | 2023-10-03T17:47:39.000Z | [
"region:us"
] | shossain | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 133180
num_examples: 5
download_size: 45937
dataset_size: 133180
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-5-2048"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NikiTricky/digital-bg | 2023-10-05T15:45:49.000Z | [
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:bg",
"region:us"
] | NikiTricky | null | null | null | 0 | 17 | ---
task_categories:
- text-generation
- summarization
- text-classification
language:
- bg
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: "posts.json"
---
# Digital.bg articles |
Intuit-GenSRF/jquiros-suicide | 2023-10-05T00:50:50.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 165623664
num_examples: 232074
download_size: 100436023
dataset_size: 165623664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jquiros-suicide"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/toxigen-train | 2023-10-05T01:45:00.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 138945158
num_examples: 250951
download_size: 3070653
dataset_size: 138945158
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "toxigen-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jayashri710/llama2-cricketdata | 2023-10-06T09:50:46.000Z | [
"region:us"
] | jayashri710 | null | null | null | 0 | 17 | Entry not found |
kibru/e2e | 2023-10-05T19:29:29.000Z | [
"license:mit",
"region:us"
] | kibru | null | null | null | 0 | 17 | ---
license: mit
---
|
towhid/aesir-test | 2023-10-06T20:29:56.000Z | [
"region:us"
] | towhid | null | null | null | 0 | 17 | Entry not found |
JordanTallon/political_bias | 2023-10-07T21:43:25.000Z | [
"region:us"
] | JordanTallon | null | null | null | 0 | 17 | Entry not found |
Alegzandra/REDv2_EN_ANN | 2023-10-09T12:59:38.000Z | [
"license:apache-2.0",
"region:us"
] | Alegzandra | null | null | null | 0 | 17 | ---
license: apache-2.0
---
|
akshaysaju9660/llama_tests | 2023-10-08T12:54:22.000Z | [
"license:llama2",
"region:us"
] | akshaysaju9660 | null | null | null | 0 | 17 | ---
license: llama2
---
|
Yukang/LongAlpaca-12k | 2023-10-10T02:56:29.000Z | [
"arxiv:2309.12307",
"region:us"
] | Yukang | null | null | null | 13 | 17 | # LongLoRA and LongAlpaca for Long-context LLMs
[](https://huggingface.co/Yukang)
[](https://github.com/dvlab-research/LongLoRA)
[](https://huggingface.co/datasets/Yukang/LongAlpaca-12k)
[](https://arxiv.org/abs/2309.12307)
[](https://github.com/dvlab-research/LongLoRA/blob/main/LICENSE)
[](https://github.com/dvlab-research/LongLoRA/blob/main/DATA_LICENSE)
[](https://github.com/dvlab-research/LongLoRA/blob/main/WEIGHT_LICENSE)
For detailed usage and codes, please visit the [Github project](https://github.com/dvlab-research/LongLoRA).
## TABLE OF CONTENTS
1. [News](#news)
2. [Examples](#examples)
3. [Highlights](#highlights)
4. [How to contribute](#how-to-contribute)
5. [Requirements](#usage-requirements)
6. [Installation and quick guide](#installation-and-quick-guide)
7. [LongAlpaca Data](#longalpaca-data)
8. [Models](#models)
9. [Training](#training)
10. [Evaluation](#evaluation)
11. [Demo](#demo)
12. [Data Generation via Pdf2Text](#data-generation-via-pdf2text)
13. [Citation](#citation)
14. [Acknowledgement](#acknowledgement)
15. [License](#license)
## News
- [x] [2023.10.8] **We release the long instruction-following dataset**, [LongAlpaca-12k](https://huggingface.co/datasets/Yukang/LongAlpaca-12k) and **the corresponding models**, [LongAlpaca-7B](https://huggingface.co/Yukang/LongAlpaca-7B), [LongAlpaca-13B](https://huggingface.co/Yukang/LongAlpaca-13B), and [LongAlpaca-70B](https://huggingface.co/Yukang/LongAlpaca-70B).
- (*The previous sft models*, [Llama-2-13b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) and [Llama-2-70b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft), *have been deprecated*.)
- [x] [2023.10.3] We add support for GPTNeoX models. Please refer to this [PR](https://github.com/dvlab-research/LongLoRA/pull/32) for usage. Thanks to @naubull2 for this contribution.
- [x] [2023.9.22] We release all our fine-tuned [models](https://huggingface.co/Yukang), including **70B-32k models**, [LLaMA2-LongLoRA-70B-32k](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k), and **7B-100k models**, [LLaMA2-LongLoRA-7B-100k](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft). Welcome to check them out!
- [x] [2023.9.22] We release [Paper](http://arxiv.org/abs/2309.12307) and this GitHub repo, including training and evaluation code.
**LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models [[Paper](http://arxiv.org/abs/2309.12307)]** <br />
[Yukang Chen](https://scholar.google.com/citations?user=6p0ygKUAAAAJ&hl=en),
[Shengju Qian](https://scholar.google.com/citations?user=QNnWmasAAAAJ),
[Haotian Tang](https://scholar.google.com/citations?user=WxL13BAAAAAJ&hl),
[Xin Lai](https://scholar.google.com/citations?user=tqNDPA4AAAAJ&hl=zh-CN),
[Zhijian Liu](https://scholar.google.com/citations?user=3coYSTUAAAAJ&hl=en),
[Song Han](https://scholar.google.com/citations?user=E0iCaa4AAAAJ&hl=zh-CN),
[Jiaya Jia](https://scholar.google.com/citations?user=XPAkzTEAAAAJ&hl=en)<br />
## Highlights
1. In the LongLoRA approach, the proposed shifted short attention is easy to implement, compatible with Flash-Attention, and not required during inference.
2. We released all our models, ranging from 7B to 70B with context lengths from 8k to 100k, including [LLaMA2-LongLoRA-7B-100k](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft), [LLaMA2-LongLoRA-13B-64k](https://huggingface.co/Yukang/Llama-2-13b-longlora-64k), and [LLaMA2-LongLoRA-70B-32k](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k).
3. We built up a long-context instruction-following dataset, [LongAlpaca-12k](#longalpaca-data). We released the corresponding [LongAlpaca-7B](https://huggingface.co/Yukang/LongAlpaca-7B), [LongAlpaca-13B](https://huggingface.co/Yukang/LongAlpaca-13B) and [LongAlpaca-70B](https://huggingface.co/Yukang/LongAlpaca-70B) models. To our best knowledge, this is the first open-sourced long-context 70B model.
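As a toy illustration of the grouping idea behind shifted short attention (not the actual implementation): tokens are split into groups that attend locally, and for half the attention heads the sequence is rolled by half a group size, so that the shifted groups straddle the original group boundaries and information can flow between neighboring groups:

```python
def shift_group(tokens, group_size, shift):
    """Roll the sequence by `shift`, then split it into local attention groups."""
    rolled = tokens[shift:] + tokens[:shift]
    return [rolled[i:i + group_size] for i in range(0, len(rolled), group_size)]

tokens = list(range(8))
group_size = 4

# Half of the heads use the unshifted grouping ...
plain = shift_group(tokens, group_size, 0)                 # [[0,1,2,3], [4,5,6,7]]
# ... the other half shift by half a group, so groups straddle the boundary.
shifted = shift_group(tokens, group_size, group_size // 2)  # [[2,3,4,5], [6,7,0,1]]
```

In the real model this shift is applied to the key/value token dimension per head, which is why it adds no parameters and can be undone at inference time.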
## How to Contribute
- Make sure to have git installed.
- Create your own [fork](https://github.com/dvlab-research/LongLoRA/fork) of the project.
- Clone the repository on your local machine, using git clone and pasting the url of this project.
- Read both the `Requirements` and `Installation and Quick Guide` sections below.
- Commit and push your changes.
- Make a pull request when finished modifying the project.
## Usage Requirements
To download and use the [pre-trained weights](#pre-trained-weights) you will need:
1. Hugging Face (HF) account with valid email. Note, the email used for HF must also be used for the license agreement.
2. Accept the Meta [license and acceptable use policy](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Installation and Quick Guide
To install and run the application:
1. [Fork this repo](https://github.com/dvlab-research/LongLoRA/fork) on github
2. Clone the repository on your local machine, using git clone and pasting the url of this project.
3. Run the following code:
```
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
```
4. Use either a [Released model](#released-models) or [Fine tune](#fine-tuning) a model to fit your preferences.
5. Test your model by chat.
6. Deploy your own demo.
## LongAlpaca Data
LongAlpaca-12k contains 9k long QA data that we collected and 3k short QA sampled from the original [Alpaca data](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json). This is to avoid the model degrading at short instruction following. The collected data covers various types and amounts, as shown in the following table.
| Data | Short QA | Long QA | Total | Download |
|:---------------|----------|----------|----------|----------|
| LongAlpaca-12k | 3k | 9k | 12k | [Link](https://huggingface.co/datasets/Yukang/LongAlpaca-12k) |
Following the original Alpaca format, our Long QA data uses the following prompts for fine-tuning:
- `instruction`: `str`, describes the task the model should perform. For example, to answer a question after reading a book section or paper. We vary the contents and questions to make instructions diverse.
- `output`: `str`, the answer to the instruction.
We did not use the `input` format in the Alpaca format for simplicity.
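For illustration, the standard Alpaca "no input" prompt template applied to such a record might look as follows; the exact template used during training is defined in the repository's fine-tuning scripts, so treat the template string here as an assumption:

```python
# Standard Alpaca "no input" template (assumption: the repository's
# supervised fine-tuning script may use a slightly different wording).
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

record = {
    "instruction": "Summarize the following book section: ...",
    "output": "The section describes ...",
}

prompt = PROMPT.format(instruction=record["instruction"])
full_text = prompt + record["output"]  # what the model is trained to continue
```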
## Models
### Models with supervised fine-tuning
| Model | Size | Context | Train | Link |
|:---------------|------|---------|---------|-----------------------------------------------------------------------------------------------------------------------|
| LongAlpaca-7B | 7B | 32768 | Full FT | [Model](https://huggingface.co/Yukang/LongAlpaca-7B) |
| LongAlpaca-13B | 13B | 32768 | Full FT | [Model](https://huggingface.co/Yukang/LongAlpaca-13B) |
| LongAlpaca-70B | 70B | 32768 | LoRA+ | [Model](https://huggingface.co/Yukang/LongAlpaca-70B) [(LoRA-weight)](https://huggingface.co/Yukang/LongAlpaca-70B-lora) |
### Models with context extension via fully fine-tuning
| Model | Size | Context | Train | Link |
|:----------------------------|------|---------|-------|-------------------------------------------------------------------|
| Llama-2-7b-longlora-8k-ft | 7B | 8192 | Full FT | [Model](https://huggingface.co/Yukang/Llama-2-7b-longlora-8k-ft) |
| Llama-2-7b-longlora-16k-ft | 7B | 16384 | Full FT | [Model](https://huggingface.co/Yukang/Llama-2-7b-longlora-16k-ft) |
| Llama-2-7b-longlora-32k-ft | 7B | 32768 | Full FT | [Model](https://huggingface.co/Yukang/Llama-2-7b-longlora-32k-ft) |
| Llama-2-7b-longlora-100k-ft | 7B | 100000 | Full FT | [Model](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft) |
| Llama-2-13b-longlora-8k-ft | 13B | 8192 | Full FT | [Model](https://huggingface.co/Yukang/Llama-2-13b-longlora-8k-ft) |
| Llama-2-13b-longlora-16k-ft | 13B | 16384 | Full FT | [Model](https://huggingface.co/Yukang/Llama-2-13b-longlora-16k-ft) |
| Llama-2-13b-longlora-32k-ft | 13B | 32768 | Full FT | [Model](https://huggingface.co/Yukang/Llama-2-13b-longlora-32k-ft) |
### Models with context extension via improved LoRA fine-tuning
| Model | Size | Context | Train | Link |
|:----------------------------|------|---------|-------|---------------------------------------------------------------------|
| Llama-2-7b-longlora-8k | 7B | 8192 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-7b-longlora-8k) |
| Llama-2-7b-longlora-16k | 7B | 16384 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-7b-longlora-16k) |
| Llama-2-7b-longlora-32k | 7B | 32768 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-7b-longlora-32k) |
| Llama-2-13b-longlora-8k | 13B | 8192 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-13b-longlora-8k) |
| Llama-2-13b-longlora-16k | 13B | 16384 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-13b-longlora-16k) |
| Llama-2-13b-longlora-32k | 13B | 32768 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-13b-longlora-32k) |
| Llama-2-13b-longlora-64k | 13B | 65536 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-13b-longlora-64k) |
| Llama-2-70b-longlora-32k | 70B | 32768 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k) |
| Llama-2-70b-chat-longlora-32k | 70B | 32768 | LoRA+ | [LoRA-weight](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k) |
## Training
### Pre-trained weights
We use LLaMA2 models as the pre-trained weights and fine-tune them to long context window sizes. Download based on your choices.
| Pre-trained weights |
|:-------------------------------------------------------------------------------------|
| [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) |
|[Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) |
| [Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) |
| [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) |
| [Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) |
| [Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) |
This project also supports GPTNeoX models as the base model architecture. Some candidate pre-trained weights may include [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b), [Polyglot-ko-12.8B](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) and other variants.
### Fine-tuning
```
torchrun --nproc_per_node=8 fine-tune.py \
--model_name_or_path path_to/Llama-2-7b-hf \
--bf16 True \
--output_dir path_to_saving_checkpoints \
--cache_dir path_to_cache \
--model_max_length 8192 \
--use_flash_attn True \
--low_rank_training False \
--num_train_epochs 1 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 1000 \
--save_total_limit 2 \
--learning_rate 2e-5 \
--weight_decay 0.0 \
--warmup_steps 20 \
--lr_scheduler_type "constant_with_warmup" \
--logging_steps 1 \
--deepspeed "ds_configs/stage2.json" \
--tf32 True \
--max_steps 1000
```
- Please remember to change `path_to/Llama-2-7b-hf`, `path_to_saving_checkpoints`, `path_to_cache` to your own directory.
- Note that you can change `model_max_length` to other values.
- You could change `ds_configs/stage2.json` to `ds_configs/stage3.json` if you want.
- Please set `use_flash_attn` as `False` if you use V100 machines or do not install flash attention.
- You can set `low_rank_training` as `False` if you want to use full fine-tuning. It will cost more GPU memory and be slower, but the performance will be a bit better.
- When training is finished, to get the full model weight:
```
cd path_to_saving_checkpoints && python zero_to_fp32.py . pytorch_model.bin
```
### Supervised Fine-tuning
```
torchrun --nproc_per_node=8 supervised-fine-tune.py \
--model_name_or_path path_to_Llama2_chat_models \
--bf16 True \
--output_dir path_to_saving_checkpoints \
--model_max_length 32768 \
--use_flash_attn True \
--data_path LongAlpaca-12k.json \
--low_rank_training True \
--num_train_epochs 3 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 1000 \
--save_total_limit 2 \
--learning_rate 2e-5 \
--weight_decay 0.0 \
--warmup_steps 20 \
--lr_scheduler_type "constant_with_warmup" \
--logging_steps 1 \
--deepspeed "ds_configs/stage2.json" \
--tf32 True
```
- There is no need to run supervised fine-tuning on top of the context-extended models. It is fine to start directly from the base Llama2-chat models, as the amount of long instruction-following data is enough for SFT.
- Our long instruction following data can be found in [LongAlpaca-12k.json](https://huggingface.co/datasets/Yukang/LongAlpaca-12k).
### Get trainable weights in low-rank training
In low-rank training, we set embedding and normalization layers as trainable. Please use the following line to extract the trainable weights `trainable_params.bin` from `pytorch_model.bin`:
```
python3 get_trainable_weights.py --checkpoint_path path_to_saving_checkpoints --trainable_params "embed,norm"
```
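The `"embed,norm"` argument amounts to substring filtering over the checkpoint's parameter names. A dependency-free sketch of the idea (the state dict below is a toy stand-in; real checkpoints map parameter names to tensors):

```python
# Toy state dict; real checkpoints map parameter names to tensors.
state_dict = {
    "model.embed_tokens.weight": "tensor0",
    "model.layers.0.self_attn.q_proj.weight": "tensor1",
    "model.layers.0.input_layernorm.weight": "tensor2",
    "lm_head.weight": "tensor3",
}

trainable_keys = ("embed", "norm")  # from --trainable_params "embed,norm"
trainable_params = {
    name: value
    for name, value in state_dict.items()
    if any(key in name for key in trainable_keys)
}
# Keeps the embedding and layer-norm weights, drops the attention projection.
```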
### Merge LoRA Weight
Merge the LoRA weights of `pytorch_model.bin` and trainable parameters `trainable_params.bin`, and save the resulting model into your desired path in the Hugging Face format:
```
python3 merge_lora_weights_and_save_hf_model.py \
--base_model path_to/Llama-2-7b-hf \
--peft_model path_to_saving_checkpoints \
--context_size 8192 \
--save_path path_to_saving_merged_model
```
For example,
```
python3 merge_lora_weights_and_save_hf_model.py \
--base_model /dataset/pretrained-models/Llama-2-7b-hf \
--peft_model /dataset/yukangchen/hf_models/lora-models/Llama-2-7b-longlora-8k \
--context_size 8192 \
--save_path /dataset/yukangchen/models/Llama-2-7b-longlora-8k-merged
```
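Conceptually, merging folds the low-rank update back into the frozen base weight, W' = W + (alpha / r) * B A. A dependency-free toy sketch of that arithmetic (the shapes and hyperparameters here are illustrative, not the repository's defaults):

```python
# Toy 2x2 example of the LoRA merge: W' = W + (alpha / r) * B @ A.
# Matrices are nested lists to keep the sketch dependency-free.
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (d x d)
B = [[1.0], [0.0]]             # LoRA "up" projection (d x r), here r = 1
A = [[0.0, 2.0]]               # LoRA "down" projection (r x d)
alpha, r = 16, 8               # hypothetical hyperparameters
scale = alpha / r

delta = matmul(B, A)
W_merged = [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
# W_merged == [[1.0, 4.0], [0.0, 1.0]]
```

After merging, the adapter matrices can be discarded and inference runs at exactly the base model's cost.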
## Evaluation
### Perplexity Validation
To evaluate a model that is trained in the low-rank setting, please set both `base_model` and `peft_model`. `base_model` is the pre-trained weight. `peft_model` is the path to the saved checkpoint, which should contain `trainable_params.bin`, `adapter_model.bin` and `adapter_config.json`. For example,
```
python3 eval.py --seq_len 8192 --context_size 8192 --batch_size 1 --base_model path_to/Llama-2-7b-hf --peft_model path_to_saving_checkpoints --data_path pg19/test.bin
```
To evaluate a model that is fully fine-tuned, you only need to set `base_model` as the path to the saved checkpoint, which should contain `pytorch_model.bin` and `config.json`. `peft_model` should be ignored.
```
python3 eval.py --seq_len 8192 --context_size 8192 --batch_size 1 --base_model path_to_saving_checkpoints --data_path pg19/test.bin
```
- Note that `--seq_len` is to set the sequence length for evaluation. `--context_size` is to set the context length of the model during fine-tuning. `--seq_len` should not be larger than `--context_size`.
- We have already tokenized the validation and test splits of the PG19 and proof-pile datasets into `pg19/validation.bin`, `pg19/test.bin`, and `proof-pile/test_sampled_data.bin`, with the LLaMA tokenizer. `proof-pile/test_sampled_data.bin` contains 128 documents randomly sampled from the full proof-pile test split. Each document has at least 32768 tokens. We also release the sampled ids in [proof-pile/test_sampled_ids.bin](https://drive.google.com/file/d/1cnzWODLRQYAd7HeugzLCIhaqzaLZv7J5/view?usp=share_link). You can download them from the links below.
| Dataset | Split | Link |
|:-----------|------------|--------------------------------------------------------------------------------------------------------------|
| PG19 | validation | [pg19/validation.bin](https://drive.google.com/file/d/1rbJvb0qRIf2mQoN2ON7S93TbTzMnlrN6/view?usp=share_link) |
| PG19 | test | [pg19/test.bin](https://drive.google.com/file/d/1QANDMdctpacPAYgS04adDXqByGEq-Ret/view?usp=share_link) |
| Proof-pile | test | [proof-pile/test_sampled_data.bin](https://drive.google.com/file/d/1bUI5lPDvrqzY_XXJJ2sSuvZx0Y9AZClE/view?usp=share_link) |
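As a reminder of the metric itself, the perplexity reported by this kind of evaluation is the exponentiated mean token-level negative log-likelihood (the values below are made up for illustration):

```python
import math

# Per-token negative log-likelihoods (natural log), e.g. from a forward pass
# over one evaluation window. These numbers are invented for the example.
nlls = [2.1, 1.8, 2.4, 2.0]

perplexity = math.exp(sum(nlls) / len(nlls))  # exp of the mean NLL
```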
### Passkey Retrieval
We provide a way to test passkey retrieval accuracy. For example,
```
python3 passkey_retrivial.py \
--context_size 32768 \
--base_model path_to/Llama-2-7b-longlora-32k \
--max_tokens 32768 \
--interval 1000
```
- Note that `context_size` is the context length used during fine-tuning.
- `max_tokens` is the maximum document length in the passkey retrieval evaluation.
- `interval` is the step by which the document length increases. It is approximate because the document grows sentence by sentence.
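The actual prompt construction lives in `passkey_retrivial.py`; the following is a schematic reconstruction of the standard passkey test (the filler sentences and exact phrasing are assumptions, not copied from the script):

```python
import random

# Build a synthetic passkey-retrieval document: repetitive filler text with
# one hidden passkey sentence at a random position, followed by the question.
filler = "The grass is green. The sky is blue. The sun is yellow. "
passkey = random.randint(10000, 99999)
needle = f"The pass key is {passkey}. Remember it. "

n_blocks = 20
blocks = [filler] * n_blocks
blocks.insert(random.randint(0, n_blocks), needle)
document = "".join(blocks) + "What is the pass key?"
```

Retrieval accuracy is then the fraction of documents for which the model's answer contains the hidden passkey, measured as the document length grows.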
## Demo
### Local Inference
To chat with [Llama-2-13b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) or [Llama-2-70b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft), you need to run `merge_lora_weights_and_save_hf_model.py` first, and then:
```
python3 inference.py \
--base_model path_to_model \
--question $question \
--context_size $context_length \
--max_gen_len $max_gen_len \
--flash_attn True \
--material $material_content \
--material_type $material_type \
--material_title $material_title
```
To ask a question related to a book:
```
python3 inference.py \
--base_model /data/models/Llama-2-13b-chat-longlora-32k-sft \
--question "Why doesn't Professor Snape seem to like Harry?" \
--context_size 32768 \
--max_gen_len 512 \
--flash_attn True \
--material "materials/Harry Potter and the Philosophers Stone_section2.txt" \
--material_type "book" \
--material_title "Harry Potter and the Philosophers Stone"
```
Note that you can ignore `material_type` or `material_title`.
To ask a question related to a paper:
```
python3 inference.py \
--base_model /data/models/Llama-2-13b-chat-longlora-32k-sft \
--question "What are the main contributions and novelties of this work?" \
--context_size 32768 \
--max_gen_len 512 \
--flash_attn True \
--material "materials/paper1.txt" \
--material_type "paper"
```
### Online Demo
To deploy your own demo, run:
```
python3 demo.py \
--base_model path_to_model \
--context_size $context_size \
--max_gen_len $max_gen_len \
--flash_attn True
```
Example
```
python3 demo.py \
--base_model /data/models/Llama-2-13b-chat-longlora-32k-sft \
--context_size 32768 \
--max_gen_len 512 \
--flash_attn True
```
- Note that `flash_attn=True` will make the generation slow but save much GPU memory.
## Data Generation via Pdf2text
During our dataset collection, we converted papers and books from PDF to text. The conversion quality has a large influence on the final model quality, and we consider this step non-trivial. We release the tool for the PDF-to-text conversion in the folder `pdf2txt`. It is built upon `pdf2image`, `easyocr`, `ditod` and `detectron2`. Please refer to the [README.md](pdf2txt/README.md) in `pdf2txt` for more details.
## Citation
If you find this project useful in your research, please consider citing:
```
@article{longlora,
title={LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models},
author={Yukang Chen and Shengju Qian and Haotian Tang and Xin Lai and Zhijian Liu and Song Han and Jiaya Jia},
journal={arXiv:2309.12307},
year={2023}
}
```
```
@misc{long-alpaca,
author = {Yukang Chen and Shaozuo Yu and Shengju Qian and Haotian Tang and Xin Lai and Zhijian Liu and Song Han and Jiaya Jia},
title = {Long Alpaca: Long-context Instruction-following models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/dvlab-research/LongLoRA}},
}
```
## Acknowledgement
- This work is built upon the [LLaMA2](https://ai.meta.com/llama) as the pre-trained models.
- This work can also be built upon the [GPTNeoX-HF](https://huggingface.co/docs/transformers/model_doc/gpt_neox) which is based upon [EleutherAI/GPTNeoX](https://github.com/EleutherAI/gpt-neox) as the pre-trained model architecture.
- This work is based on [DeepSpeed](https://github.com/microsoft/DeepSpeed), [peft](https://github.com/huggingface/peft), and [Flash-Attention2](https://github.com/Dao-AILab/flash-attention) for acceleration.
- Some evaluation code is modified upon [Landmark Attention](https://github.com/epfml/landmark-attention).
- We use [LongChat](https://github.com/DachengLi1/LongChat) for the retrieval evaluation.
## License
- LongLoRA is licensed under the Apache License 2.0. This means that it requires the preservation of copyright and license notices.
- Data and weights are under the CC-BY-NC 4.0 License. They are licensed for research use only, and only non-commercial use is allowed. Models trained using the dataset should not be used outside of research purposes. |
bbaw_egyptian | 2023-04-05T09:36:39.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:de",
"language:egy",
"language:en",
"license:cc-by-4.0",
"region:us"
] | null | This dataset comprises parallel sentences of hieroglyphic encodings, transcription and translation
as used in the paper Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian
Hieroglyph. The data triples are extracted from the digital corpus of Egyptian texts compiled by
the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache". | @misc{OPUS4-2919,
title = {Teilauszug der Datenbank des Vorhabens "Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache" vom Januar 2018},
institution = {Akademienvorhaben Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache. Text- und Wissenskultur im alten {\"A}gypten},
type = {other},
year = {2018},
} | null | 5 | 16 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- de
- egy
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: BbawEgyptian
dataset_info:
features:
- name: transcription
dtype: string
- name: translation
dtype: string
- name: hieroglyphs
dtype: string
splits:
- name: train
num_bytes: 18546162
num_examples: 100736
download_size: 35348686
dataset_size: 18546162
---
# Dataset Card for "bbaw_egyptian"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://edoc.bbaw.de/frontdoor/index/index/docId/2919](https://edoc.bbaw.de/frontdoor/index/index/docId/2919)
- **Repository:** [Github](https://phiwi.github.io/all.json)
- **Paper:** [Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyph](https://zenodo.org/record/3524924)
- **Point of Contact:** [Philipp Wiesenbach](https://www.cl.uni-heidelberg.de/~wiesenbach/index.html)
- **Size of downloaded dataset files:** 35.65 MB
### Dataset Summary
This dataset comprises parallel sentences of hieroglyphic encodings, transcription and translation as used in the paper [Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyph](https://zenodo.org/record/3524924). The data triples are extracted from the [digital corpus of Egyptian texts](https://edoc.bbaw.de/frontdoor/index/index/docId/2919) compiled by the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache".
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The dataset consists of parallel triples of
- `hieroglyphs`: Encoding of the hieroglyphs with [Gardiner's sign list](https://en.wikipedia.org/wiki/Gardiner%27s_sign_list)
- `transcription`: Transliteration of the above-mentioned hieroglyphs with a [transliteration scheme](https://en.wikipedia.org/wiki/Transliteration_of_Ancient_Egyptian)
- `translation`: Translation, mostly in German (with some English mixed in)
## Dataset Structure
The dataset is not divided into 'train', 'dev' and 'test' splits, as it was not built for competitive purposes; we encourage all scientists to use individual partitioning schemes to suit their needs (due to the low-resource setting, it might be advisable to use cross-validation anyway). The only available split, 'all', therefore comprises the full 100,708 translation triples, 35,503 of which possess hieroglyphic encodings (the remaining 65,205 triples have empty `hieroglyphs` entries).
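Since only the single 'all' split ships with the dataset, users build their own partitions. A minimal cross-validation sketch over index lists (pure Python, so it applies regardless of how you load the triples):

```python
def kfold_indices(n_examples: int, k: int = 5):
    """Yield (train_idx, test_idx) pairs for simple k-fold cross-validation."""
    # Distribute the remainder over the first folds so sizes differ by at most 1.
    fold_sizes = [n_examples // k + (1 if i < n_examples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = list(range(0, start)) + list(range(start + size, n_examples))
        yield train_idx, test_idx
        start += size

folds = list(kfold_indices(10, k=5))
print(len(folds), len(folds[0][1]))  # 5 2
```

In practice you would call `kfold_indices(len(dataset))` and select rows by index.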
### Data Instances
An example of a data triple looks the following way:
```
{
"transcription": "n rḏi̯(.w) gꜣ =j r dbḥ.t m pr-ḥḏ",
"translation": "I was not let to suffer lack in the treasury with respect to what was needed;",
"hieroglyphs": "D35 D21 -D37 G1&W11 -V32B A1 D21 D46 -D58 *V28 -F18 *X1 -A2 G17 [? *O2 *?]"
}
```
*Important*: Only about a third of the instances actually carry hieroglyphic encodings (the rest contain the empty string `""`), as the remaining encodings have not yet been incorporated into the BBAW's project database.
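Because roughly two thirds of the `hieroglyphs` fields are empty, a common first step is to filter them out. A minimal sketch over in-memory records (with the `datasets` library, the same predicate can be passed to `Dataset.filter`):

```python
# Sketch: keep only triples that carry a hieroglyphic encoding.
# The example records are abbreviated; field names follow the card above.
examples = [
    {"transcription": "n rḏi̯(.w) gꜣ =j r dbḥ.t m pr-ḥḏ",
     "translation": "I was not let to suffer lack in the treasury ...",
     "hieroglyphs": "D35 D21 -D37 G1&W11 -V32B A1"},
    {"transcription": "...", "translation": "...", "hieroglyphs": ""},  # no encoding yet
]

with_glyphs = [ex for ex in examples if ex["hieroglyphs"]]
print(len(with_glyphs))  # 1
```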
### Data Fields
#### plain_text
- `transcription`: a `string` feature.
- `translation`: a `string` feature.
- `hieroglyphs`: a `string` feature.
### Data Splits
| name |all|
|----------|----:|
|plain_text|100708|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
The data source comes from the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache" which is compiling an extensively annotated digital corpus of Egyptian texts. Their [publication](https://edoc.bbaw.de/frontdoor/index/index/docId/2919) comprises an excerpt of the internal database's contents.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The corpus has not been preprocessed, as we encourage every scientist to prepare the corpus to their desired needs. This means that all text-critical symbols are still included in the transliteration and translation. This concerns the following annotations:
- `()`: defective
- `[]`: lost
- `{}`: surplus
- `〈〉`: omitted
- `⸢⸣`: damaged
- `⸮?`: unclear
- `{{}}`: erasure
- `(())`: above
- `[[]]`: overstrike
- `〈〈〉〉`: haplography
There exists a similar sign list for the annotation of the hieroglyphic encoding. If you wish to access this list, please get in contact with the author.
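For pipelines that prefer plain text, the text-critical markup listed above can be stripped. A hedged sketch (the character set is taken from the list above, not from an official cleaning script; note that `?` belongs to the `⸮?` annotation, so ordinary question marks are removed as well, and the brackets do carry philological information you may want to keep):

```python
import re

# Translation table deleting the text-critical symbols listed in the card.
MARKUP = {ord(c): None for c in "()[]{}〈〉⸢⸣⸮?"}

def strip_textcritic(s: str) -> str:
    # Remove the markup, then collapse any whitespace left behind.
    return re.sub(r"\s+", " ", s.translate(MARKUP)).strip()

print(strip_textcritic("n rḏi̯(.w) gꜣ =j r ⸢dbḥ.t⸣"))  # n rḏi̯.w gꜣ =j r dbḥ.t
```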
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
Source corpus:
```
@misc{OPUS4-2919,
title = {Teilauszug der Datenbank des Vorhabens "Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache" vom Januar 2018},
institution = {Akademienvorhaben Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache. Text- und Wissenskultur im alten {\"A}gypten},
type = {other},
year = {2018},
}
```
Translation paper:
```
@article{wiesenbach19,
title = {Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyphs},
author = {Wiesenbach, Philipp and Riezler, Stefan},
journal = {Proceedings of the International Workshop on Spoken Language Translation},
journal-abbrev = {IWSLT},
year = {2019},
url = {https://www.cl.uni-heidelberg.de/statnlpgroup/publications/IWSLT2019_v2.pdf}
}
```
### Contributions
Thanks to [@phiwi](https://github.com/phiwi) for adding this dataset. |
poleval2019_cyberbullying | 2023-01-25T14:42:46.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:unknown",
"region:us"
] | null | In Task 6-1, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets
that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech and
related phenomena.
In Task 6-2, the participants shall distinguish between three classes of tweets: 0 (non-harmful),
1 (cyberbullying), 2 (hate-speech). There are various definitions of both cyberbullying and hate-speech,
some of them even putting those two phenomena in the same group. The specific conditions on which we based
our annotations for both cyberbullying and hate-speech, which have been worked out during ten years of research
will be summarized in an introductory paper for the task, however, the main and definitive condition to
distinguish the two is whether the harmful action is addressed towards a private person(s) (cyberbullying),
or a public person/entity/large group (hate-speech). | @proceedings{ogr:kob:19:poleval,
editor = {Maciej Ogrodniczuk and Łukasz Kobyliński},
title = {{Proceedings of the PolEval 2019 Workshop}},
year = {2019},
address = {Warsaw, Poland},
publisher = {Institute of Computer Science, Polish Academy of Sciences},
url = {http://2019.poleval.pl/files/poleval2019.pdf},
isbn = "978-83-63159-28-3"
} | null | 1 | 16 | ---
annotations_creators:
- found
language_creators:
- found
language:
- pl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
pretty_name: Poleval 2019 cyberbullying
dataset_info:
- config_name: task01
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 1104322
num_examples: 10041
- name: test
num_bytes: 109681
num_examples: 1000
download_size: 410001
dataset_size: 1214003
- config_name: task02
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
splits:
- name: train
num_bytes: 1104322
num_examples: 10041
- name: test
num_bytes: 109681
num_examples: 1000
download_size: 410147
dataset_size: 1214003
---
# Dataset Card for Poleval 2019 cyberbullying
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://2019.poleval.pl/index.php/tasks/task6
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Task 6-1: Harmful vs non-harmful
In this task, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets that contain any kind of harmful
information (class: 1). This includes cyberbullying, hate speech and related phenomena. The data for the task can be downloaded from the
task homepage linked above.
Task 6-2: Type of harmfulness
In this task, the participants shall distinguish between three classes of tweets: 0 (non-harmful), 1 (cyberbullying), 2 (hate-speech). There
are various definitions of both cyberbullying and hate-speech, some of them even putting those two phenomena in the same group. The specific
conditions on which we based our annotations for both cyberbullying and hate-speech, which have been worked out during ten years of research
will be summarized in an introductory paper for the task, however, the main and definitive condition to distinguish the two is whether the
harmful action is addressed towards a private person(s) (cyberbullying), or a public person/entity/large group (hate-speech).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- text: the provided tweet
- label: for task 6-1 the label can be 0 (non-harmful) or 1 (harmful)
for task 6-2 the label can be 0 (non-harmful), 1 (cyberbullying) or 2 (hate-speech)
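As an illustration, the integer labels can be mapped back to their names with a small helper. This is a sketch; the config names `task01`/`task02` come from the card's metadata, and a load via `datasets.load_dataset("poleval2019_cyberbullying", "task02")` would yield these integer labels:

```python
# Label id -> name mappings for the two tasks, as described above.
TASK1_LABELS = {0: "non-harmful", 1: "harmful"}
TASK2_LABELS = {0: "non-harmful", 1: "cyberbullying", 2: "hate-speech"}

def label_name(task: int, label: int) -> str:
    """Return the human-readable label name for task 6-1 (task=1) or 6-2 (task=2)."""
    return (TASK1_LABELS if task == 1 else TASK2_LABELS)[label]

print(label_name(2, 2))  # hate-speech
```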
### Data Splits
Train and Test
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@proceedings{ogr:kob:19:poleval,
editor = {Maciej Ogrodniczuk and Łukasz Kobyliński},
title = {{Proceedings of the PolEval 2019 Workshop}},
year = {2019},
address = {Warsaw, Poland},
publisher = {Institute of Computer Science, Polish Academy of Sciences},
url = {http://2019.poleval.pl/files/poleval2019.pdf},
isbn = "978-83-63159-28-3"
}
```
### Contributions
Thanks to [@czabo](https://github.com/czabo) for adding this dataset. |
NYTK/HuSST | 2023-03-27T09:54:13.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:text-scoring",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datase... | NYTK | null | null | null | 1 | 16 | ---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- hu
license:
- bsd-2-clause
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
- text-scoring
pretty_name: HuSST
---
# Dataset Card for HuSST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuSST dataset](https://github.com/nytud/HuSST)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian version of the Stanford Sentiment Treebank. The dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original SST (Socher et al., 2013).
### Supported Tasks and Leaderboards
'sentiment classification'
'sentiment scoring'
### Language
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a sentence and a sentiment label.
An example:
```
{
"Sent_id": "dev_0",
"Sent": "Nos, a Jason elment Manhattanbe és a Pokolba kapcsán, azt hiszem, az elkerülhetetlen folytatások ötletlistájáról kihúzhatunk egy űrállomást 2455-ben (hé, ne lődd le a poént).",
"Label": "neutral"
}
```
### Data Fields
- Sent_id: unique id of the instances;
- Sent: the sentence, translation of an instance of the SST dataset;
- Label: "negative", "neutral", or "positive".
### Data Splits
HuSST has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split |
|---------------|----------------------------------|
| train | 9344 |
| validation | 1168 |
| test | 1168 |
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the content of the SST dataset (only the whole sentences were used). Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.
### Annotations
#### Annotation process
The translated sentences were annotated by three human annotators with one of the following labels: negative, neutral and positive. Each sentence was then curated by a fourth annotator (the 'curator'). The final label is the decision of the curator based on the three labels of the annotators.
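For illustration, the simplest curation rule consistent with this setup is a majority vote over the three annotator labels. This is a hypothetical sketch — the card does not specify how the curator actually resolved disagreements:

```python
from collections import Counter

def curate(annotator_labels):
    # Hypothetical curation rule: pick the most frequent of the three labels.
    # The human curator described above may have decided differently.
    (label, _count), = Counter(annotator_labels).most_common(1)
    return label

print(curate(["neutral", "positive", "neutral"]))  # neutral
```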
#### Who are the annotators?
The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.
## Additional Information
### Licensing Information
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Vadász, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. pp. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Vadász, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
pages = {431--446}
}
```
and to:
Socher et al. (2013), Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 1631--1642.
```
@inproceedings{socher-etal-2013-recursive,
title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
author = "Socher, Richard and
Perelygin, Alex and
Wu, Jean and
Chuang, Jason and
Manning, Christopher D. and
Ng, Andrew and
Potts, Christopher",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1170",
pages = "1631--1642",
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. |
SetFit/amazon_counterfactual | 2022-02-08T10:15:40.000Z | [
"arxiv:2104.06893",
"region:us"
] | SetFit | The dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false). | @misc{oneill2021i,
title={I Wish I Would Have Loved This One, But I Didn't -- A Multilingual Dataset for Counterfactual Detection in Product Reviews},
author={James O'Neill and Polina Rozenshtein and Ryuichi Kiryo and Motoko Kubota and Danushka Bollegala},
year={2021},
eprint={2104.06893},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 16 | # Amazon Multilingual Counterfactual Dataset
The dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).
The key features of this dataset are:
* The dataset is multilingual and contains sentences in English, German, and Japanese.
* The labeling was done by professional linguists and high quality was ensured.
* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists.
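For illustration, the clue-word pre-filtering mentioned above could look like the sketch below. The clue words here are invented examples for English only — the actual lists shipped with the dataset were compiled by professional linguists and differ per language:

```python
# Hypothetical clue words; NOT the linguists' actual lists.
CLUE_WORDS = ("wish", "would have", "should have", "if only")

def maybe_counterfactual(sentence: str) -> bool:
    """Cheap recall-oriented filter: does the sentence contain any clue phrase?"""
    s = sentence.lower()
    return any(clue in s for clue in CLUE_WORDS)

print(maybe_counterfactual("I wish I would have loved this one."))  # True
```

Such a filter only narrows the candidate pool; the final counterfactual labels were assigned by human annotators.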
Please see the [paper](https://arxiv.org/abs/2104.06893) for the data statistics, detailed description of data collection and annotation.
GitHub repo URL: https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset
## Usage
You can load each of the languages as follows:
```
from datasets import get_dataset_config_names, load_dataset
dataset_id = "SetFit/amazon_counterfactual"
# Returns ['de', 'en', 'en-ext', 'ja']
configs = get_dataset_config_names(dataset_id)
# Load English subset
dset = load_dataset(dataset_id, name="en")
``` |
midas/duc2001 | 2022-01-23T06:13:06.000Z | [
"region:us"
] | midas | \ | @inproceedings{10.5555/1620163.1620205,
author = {Wan, Xiaojun and Xiao, Jianguo},
title = {Single Document Keyphrase Extraction Using Neighborhood Knowledge},
year = {2008},
isbn = {9781577353683},
publisher = {AAAI Press},
booktitle = {Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2},
pages = {855–860},
numpages = {6},
location = {Chicago, Illinois},
series = {AAAI'08}
} | null | 1 | 16 | ## Dataset Summary
A dataset for benchmarking keyphrase extraction and generation techniques from English news articles. For more details about the dataset, please refer to the original paper - [https://dl.acm.org/doi/10.5555/1620163.1620205](https://dl.acm.org/doi/10.5555/1620163.1620205)
Original source of the data - []()
## Dataset Structure
### Data Fields
- **id**: unique identifier of the document.
- **document**: Whitespace separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B stands for the beginning of a keyphrase and I stands for inside the keyphrase. O stands for outside the keyphrase and represents the word that isn't a part of the keyphrase at all.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrase**: List of all the absent keyphrases.
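The BIO tagging described above can be decoded back into present keyphrases with a short helper. This is an illustrative sketch, not part of the official loader:

```python
def bio_to_phrases(tokens, tags):
    """Recover present keyphrases from parallel B/I/O tag sequences."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                    # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:      # continue the open keyphrase
            current.append(token)
        else:                             # "O" (or a stray "I"): close any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

print(bio_to_phrases(
    ["the", "crash", "of", "Pan", "American", "World", "Airways"],
    ["O", "B", "O", "B", "I", "I", "I"],
))  # ['crash', 'Pan American World Airways']
```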
### Data Splits
|Split| #datapoints |
|--|--|
| Test | 308 |
## Usage
### Full Dataset
```python
from datasets import load_dataset
# get entire dataset
dataset = load_dataset("midas/duc2001", "raw")
# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
Sample from test data split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['Here', ',', 'at', 'a', 'glance', ',', 'are', 'developments', 'today', 'involving', 'the', 'crash', 'of', 'Pan', 'American', 'World', 'Airways', 'Flight', '103', 'Wednesday', 'night', 'in', 'Lockerbie', ',', 'Scotland', ',', 'that', 'killed', 'all', '259', 'people', 'aboard', 'and', 'more', 'than', '20', 'people', 'on', 'the', 'ground', ':']
Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'B', 'I', 'I', 'I', 'I', 'I', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['pan american world airways flight 103', 'crash', 'lockerbie']
Abstractive/absent Keyphrases: ['terrorist threats', 'widespread wreckage', 'radical palestinian faction', 'terrorist bombing', 'bomb threat', 'sabotage']
-----------
```
### Keyphrase Extraction
```python
from datasets import load_dataset
# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/duc2001", "extraction")
print("Samples for Keyphrase Extraction")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
### Keyphrase Generation
```python
# get the dataset only for keyphrase generation
dataset = load_dataset("midas/duc2001", "generation")
print("Samples for Keyphrase Generation")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
## Citation Information
```
@inproceedings{10.5555/1620163.1620205,
author = {Wan, Xiaojun and Xiao, Jianguo},
title = {Single Document Keyphrase Extraction Using Neighborhood Knowledge},
year = {2008},
isbn = {9781577353683},
publisher = {AAAI Press},
booktitle = {Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2},
pages = {855–860},
numpages = {6},
location = {Chicago, Illinois},
series = {AAAI'08}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset
|
sentence-transformers/embedding-training-data | 2021-10-17T17:49:20.000Z | [
"region:us"
] | sentence-transformers | null | null | null | 45 | 16 | # Training Data for Text Embedding Models
This repository contains training files to train text embedding models, e.g. using [sentence-transformers](https://www.SBERT.net).
## Data Format
All files are in a `jsonl.gz` format: Each line contains a JSON-object that represent one training example.
The JSON objects can come in different formats:
- **Pairs:** `["text1", "text2"]` - This is a positive pair that should be close in vector space.
- **Triplets:** `["anchor", "positive", "negative"]` - This is a triplet: The `positive` text should be close to the `anchor`, while the `negative` text should be distant to the `anchor`.
- **Sets:** `{"set": ["text1", "text2", ...]}` A set of texts describing the same thing, e.g. different paraphrases of the same question, different captions for the same image. Any combination of the elements is considered as a positive pair.
- **Query-Pairs:** `{"query": "text", "pos": ["text1", "text2", ...]}` A query together with a set of positive texts. Can be formed to a pair `["query", "positive"]` by randomly selecting a text from `pos`.
- **Query-Triplets:** `{"query": "text", "pos": ["text1", "text2", ...], "neg": ["text1", "text2", ...]}` A query together with a set of positive texts and negative texts. Can be formed to a triplet `["query", "positive", "negative"]` by randomly selecting a text from `pos` and `neg`.
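As an illustration, the formats above can be normalized into `(anchor, positive)` training pairs with a few lines of Python. This is a minimal sketch; `iter_training_pairs` is a hypothetical helper, not part of sentence-transformers:

```python
import json
import random

def iter_training_pairs(lines, seed=42):
    """Yield (anchor, positive) pairs from training examples in any of the
    formats described above. `lines` is an iterable of JSON strings, e.g.
    obtained from gzip.open("pairs.jsonl.gz", "rt")."""
    rng = random.Random(seed)
    for line in lines:
        obj = json.loads(line)
        if isinstance(obj, list):           # pair or triplet: first two texts
            yield obj[0], obj[1]
        elif "set" in obj:                  # set: any two distinct members
            a, b = rng.sample(obj["set"], 2)
            yield a, b
        elif "query" in obj:                # query-pairs / query-triplets
            yield obj["query"], rng.choice(obj["pos"])

# Example usage on in-memory lines (swap in a real jsonl.gz file):
sample = [
    '["How tall is the Eiffel Tower?", "The Eiffel Tower is 330 m tall."]',
    '{"set": ["reset my password", "forgot password", "password recovery"]}',
    '{"query": "capital of France", "pos": ["Paris is the capital of France."]}',
]
pairs = list(iter_training_pairs(sample))
```

Triplet formats can be handled analogously by also sampling from the `neg` list.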
## Available Datasets
**Note: I'm currently in the process of uploading the files. Please check again next week to get the full list of datasets**
We measure the performance for each training dataset by training the [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model on it with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss), a batch size of 256, for 2000 training steps. The performance is then averaged across 14 sentence embedding benchmark datasets from diverse domains (Reddit, Twitter, News, Publications, E-Mails, ...).
| Dataset | Description | Size (#Lines) | Performance | Reference |
| --- | --- | :---: | :---: | --- |
| [gooaq_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/gooaq_pairs.jsonl.gz) | (Question, Answer)-Pairs from Google auto suggest | 3,012,496 | 59.06 | [GooAQ](https://github.com/allenai/gooaq)
| [yahoo_answers_title_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_title_answer.jsonl.gz) | (Title, Answer) pairs from Yahoo Answers | 1,198,260 | 58.65 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [msmarco-triplets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/msmarco-triplets.jsonl.gz) | (Question, Answer, Negative)-Triplets from MS MARCO Passages dataset | 499,184 | 58.76 | [MS MARCO Passages](https://github.com/microsoft/MSMARCO-Passage-Ranking)
| [stackexchange_duplicate_questions_title_title.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title_title.jsonl.gz) | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [eli5_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/eli5_question_answer.jsonl.gz) | (Question, Answer)-Pairs from ELI5 dataset | 325,475 | 58.24 | [ELI5](https://huggingface.co/datasets/eli5)
| [yahoo_answers_title_question.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_title_question.jsonl.gz) | (Title, Question_Body) pairs from Yahoo Answers | 659,896 | 58.05 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [squad_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/squad_pairs.jsonl.gz) | (Question, Answer_Passage) Pairs from SQuAD dataset | 87,599 | 58.02 | [SQuAD](https://huggingface.co/datasets/squad)
| [yahoo_answers_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_question_answer.jsonl.gz) | (Question_Body, Answer) pairs from Yahoo Answers | 681,164 | 57.74 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [wikihow.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/wikihow.jsonl.gz) | (Summary, Text) from WikiHow | 128,542 | 57.67 | [WikiHow](https://github.com/pvl/wikihow_pairs_dataset)
| [amazon_review_2018.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon_review_2018.jsonl.gz) | (Title, review) pairs from Amazon | 87,877,725 | 57.65 | [Amazon review data (2018)](http://deepyeti.ucsd.edu/jianmo/amazon/index.html)
| [NQ-train_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/NQ-train_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the NQ dataset | 100,231 | 57.48 | [Natural Questions](https://ai.google.com/research/NaturalQuestions)
| [amazon-qa.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz) | (Question, Answer) pairs from Amazon | 1,095,290 | 57.48 | [AmazonQA](https://github.com/amazonqa/amazonqa)
| [S2ORC_title_abstract.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_title_abstract.jsonl.gz) | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 | [S2ORC](https://github.com/allenai/s2orc)
| [quora_duplicates.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates.jsonl.gz) | Duplicate question pairs from Quora | 103,663 | 57.36 | [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
| [WikiAnswers.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/WikiAnswers.jsonl.gz) | Sets of duplicates questions | 27,383,151 | 57.34 | [WikiAnswers Corpus](https://github.com/afader/oqa#wikianswers-corpus)
| [searchQA_top5_snippets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/searchQA_top5_snippets.jsonl.gz) | Question + Top5 text snippets from SearchQA dataset. Top5 | 117,220 | 57.34 | [search_qa](https://huggingface.co/datasets/search_qa)
| [stackexchange_duplicate_questions_title-body_title-body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title-body_title-body.jsonl.gz) | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [S2ORC_citations_titles.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_citations_titles.jsonl.gz) | Citation network (paper titles) | 51,030,086 | 57.28 | [S2ORC](https://github.com/allenai/s2orc)
| [stackexchange_duplicate_questions_body_body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_body_body.jsonl.gz) | (Body, Body) pairs of duplicate questions from StackExchange | 250,519 | 57.26 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [agnews.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/agnews.jsonl.gz) | (Title, Description) pairs of news articles from the AG News dataset | 1,157,745 | 57.25 | [AG news corpus](http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html)
| [quora_duplicates_triplets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates_triplets.jsonl.gz) | Duplicate question pairs from Quora with additional hard negatives (mined & denoised by cross-encoder) | 101,762 | 56.97 | [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
| [AllNLI.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/AllNLI.jsonl.gz) | Combination of SNLI + MultiNLI Triplets: (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 | [SNLI](https://huggingface.co/datasets/snli) and [MNLI](https://huggingface.co/datasets/multi_nli)
| [npr.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/npr.jsonl.gz) | (Title, Body) pairs from the npr.org website | 594,384 | 56.44 | [Pushshift](https://files.pushshift.io/news/)
| [specter_train_triples.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz) | Triplets (Title, related_title, hard_negative) for Scientific Publications from Specter | 684,100 | 56.32 | [SPECTER](https://github.com/allenai/specter)
| [SimpleWiki.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 | [SimpleWiki](https://cs.pomona.edu/~dkauchak/simplification/)
| [PAQ_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the PAQ dataset | 64,371,441 | 56.11 | [PAQ](https://github.com/facebookresearch/PAQ)
| [altlex.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 | [altlex](https://github.com/chridey/altlex/)
| [ccnews_title_text.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/ccnews_title_text.jsonl.gz) | (Title, article) pairs from the CC News dataset | 614,664 | 55.84 | [CC-News](https://huggingface.co/datasets/cc_news)
| [codesearchnet.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/codesearchnet.jsonl.gz) | CodeSearchNet corpus is a dataset of (comment, code) pairs from opensource libraries hosted on GitHub. It contains code and documentation for several programming languages. | 1,151,414 | 55.80 | [CodeSearchNet](https://huggingface.co/datasets/code_search_net)
| [S2ORC_citations_abstracts.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_citations_abstracts.jsonl.gz) | Citation network (paper abstracts) | 39,567,485 | 55.74 | [S2ORC](https://github.com/allenai/s2orc)
| [sentence-compression.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/sentence-compression.jsonl.gz) | Pairs (long_text, short_text) about sentence-compression | 180,000 | 55.63 | [Sentence-Compression](https://github.com/google-research-datasets/sentence-compression)
| [TriviaQA_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/TriviaQA_pairs.jsonl.gz) | Pairs (query, answer) from TriviaQA dataset | 73,346 | 55.56 | [TriviaQA](https://huggingface.co/datasets/trivia_qa)
| [cnn_dailymail_splitted.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/cnn_dailymail_splitted.jsonl.gz) | (article, highlight sentence) with individual highlight sentences for each news article | 311,971 | 55.36 | [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail)
| [cnn_dailymail.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/cnn_dailymail.jsonl.gz) | (highlight sentences, article) with all highlight sentences as one text for each news article | 311,971 | 55.27 | [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail)
| [flickr30k_captions.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/flickr30k_captions.jsonl.gz) | Different captions for the same image from the Flickr30k dataset | 31,783 | 54.68 | [Flickr30k](https://shannon.cs.illinois.edu/DenotationGraph/)
| [xsum.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/xsum.jsonl.gz) | (Summary, News Article) pairs from XSUM dataset | 226,711 | 53.86 | [xsum](https://huggingface.co/datasets/xsum)
| [coco_captions.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/coco_captions.jsonl.gz) | Different captions for the same image | 82,783 | 53.77 | [COCO](https://cocodataset.org/)
**Disclaimer:** We only distribute these datasets in a specific format, but we do not vouch for their quality or fairness, or claim that you have license to use the dataset. It remains the user's responsibility to determine whether you as a user have permission to use the dataset under the dataset's license and to cite the right owner of the dataset. Please check the individual dataset webpages for the license agreements.
If you're a dataset owner and wish to update any part of it, or do not want your dataset to be included in this dataset collection, feel free to contact me.
|
svakulenk0/qrecc | 2022-07-02T17:35:21.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|natural_questions",
"source_datasets:extended|quac",
"language:en",
"license:cc-by-3.0",
"arxiv:2010.04898",
... | svakulenk0 | null | null | null | 7 | 16 | ---
pretty_name: QReCC
language_creators:
- expert-generated
- found
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
source_datasets:
- extended|natural_questions
- extended|quac
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for QReCC: Question Rewriting in Conversational Context
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- [**Repository:**](https://github.com/apple/ml-qrecc)
- [**Paper:**](https://arxiv.org/pdf/2010.04898.pdf)
- [**Leaderboard:**](https://www.tira.io/task/scai-qrecc/dataset/scai-qrecc21-test-dataset-2021-07-20)
### Dataset Summary
QReCC (Question Rewriting in Conversational Context) is an end-to-end open-domain question answering dataset comprising 14K conversations with 81K question-answer pairs. The goal of this dataset is to provide a challenging benchmark for end-to-end conversational question answering that includes the individual subtasks of question rewriting, passage retrieval and reading comprehension.
The task in QReCC is to find answers to conversational questions within a collection of 10M web pages split into 54M passages. Answers to questions in the same conversation may be distributed across several web pages.
The passage collection should be downloaded from [**Zenodo**](https://zenodo.org/record/5115890#.YaeD7C8RppR) (passages.zip)
### Supported Tasks and Leaderboards
`question-answering`
### Languages
English
## Dataset Structure
### Data Instances
An example from the data set looks as follows:
```
{
"Context": [
"What are the pros and cons of electric cars?",
"Some pros are: They're easier on the environment. Electricity is cheaper than gasoline. Maintenance is less frequent and less expensive. They're very quiet. You'll get tax credits. They can shorten your commute time. Some cons are: Most EVs have pretty short ranges. Recharging can take a while."
],
"Question": "Tell me more about Tesla",
"Rewrite": "Tell me more about Tesla the car company.",
"Answer": "Tesla Inc. is an American automotive and energy company based in Palo Alto, California. The company specializes in electric car manufacturing and, through its SolarCity subsidiary, solar panel manufacturing.",
"Answer_URL": "https://en.wikipedia.org/wiki/Tesla,_Inc.",
"Conversation_no": 74,
"Turn_no": 2,
"Conversation_source": "trec"
}
```
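For illustration, the typical input for a question-rewriting model on a turn like the one above concatenates the conversation history with the current question; the `Rewrite` field serves as the target. This is a minimal sketch where the separator string and helper name are assumptions, not part of the dataset:

```python
def build_rewrite_input(turn, sep=" ||| "):
    """Join the conversation history and the current question into a single
    string, the usual input for a question-rewriting model on QReCC."""
    return sep.join(turn["Context"] + [turn["Question"]])

# A shortened version of the example turn shown above:
turn = {
    "Context": [
        "What are the pros and cons of electric cars?",
        "Some pros are: They're easier on the environment.",
    ],
    "Question": "Tell me more about Tesla",
    "Rewrite": "Tell me more about Tesla the car company.",
}
model_input = build_rewrite_input(turn)
target = turn["Rewrite"]
```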
### Data Splits
- train: 63501
- test: 16451
## Dataset Creation
### Source Data
- QuAC
- TREC CAsT
- Natural Questions
## Additional Information
### Licensing Information
[CC BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/)
### Citation Information
```
@inproceedings{qrecc,
  title = {Open-Domain Question Answering Goes Conversational via Question Rewriting},
  author = {Anantha, Raviteja and Vakulenko, Svitlana and Tu, Zhucheng and Longpre, Shayne and Pulman, Stephen and Chappidi, Srinivas},
  booktitle = {NAACL},
  year = {2021}
}
``` |
mbartolo/synQA | 2022-10-25T10:02:24.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:1606.05250",
"reg... | mbartolo | SynQA is a Reading Comprehension dataset created in the work "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (https://aclanthology.org/2021.emnlp-main.696/).
It consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (https://arxiv.org/abs/1606.05250) training set.
In this work, we use a synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (https://adversarialqa.github.io/) dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
For full details on how the dataset was created, kindly refer to the paper. | @inproceedings{bartolo-etal-2021-improving,
title = "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation",
author = "Bartolo, Max and
Thrush, Tristan and
Jia, Robin and
Riedel, Sebastian and
Stenetorp, Pontus and
Kiela, Douwe",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.696",
doi = "10.18653/v1/2021.emnlp-main.696",
pages = "8830--8848",
abstract = "Despite recent progress, state-of-the-art question answering models remain vulnerable to a variety of adversarial attacks. While dynamic adversarial data collection, in which a human annotator tries to write examples that fool a model-in-the-loop, can improve model robustness, this process is expensive which limits the scale of the collected data. In this work, we are the first to use synthetic adversarial data generation to make question answering models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation and show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8{\%} of the time on average, compared to 17.6{\%} for a model trained without synthetic data.",
} | null | 2 | 16 | ---
annotations_creators:
- generated
language_creators:
- found
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
pretty_name: synQA
---
# Dataset Card for synQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [synQA homepage](https://github.com/maxbartolo/improving-qa-model-robustness)
- **Paper:** [Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation](https://aclanthology.org/2021.emnlp-main.696/)
- **Point of Contact:** [Max Bartolo](max.bartolo@ucl.ac.uk)
### Dataset Summary
SynQA is a Reading Comprehension dataset created in the work "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (https://aclanthology.org/2021.emnlp-main.696/).
It consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (https://arxiv.org/abs/1606.05250) training set.
In this work, we use a synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (https://adversarialqa.github.io/) dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
For full details on how the dataset was created, kindly refer to the paper.
### Supported Tasks
`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists in selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). This task is available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall), which ranks models based on F1 score.
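The word-overlap F1 metric can be sketched as follows. This is a simplified version: the official SQuAD evaluation script additionally strips punctuation and articles before tokenizing:

```python
from collections import Counter

def f1_score(prediction, ground_truth):
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gt_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gt_tokens)  # shared token counts
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```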
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "None",
"paragraphs": [
{
"context": "Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.",
"qas": [
{
"id": "689f275aacba6c43ff112b2c7cb16129bfa934fa",
"question": "What material is the statue of Christ made of?",
"answers": [
{
"answer_start": 190,
"text": "organic copper"
}
]
},
{
"id": "73bd3f52f5934e02332787898f6e568d04bc5403",
"question": "Who is on the Main Building's gold dome?",
"answers": [
{
"answer_start": 111,
"text": "the Virgin Mary."
}
]
},
{
"id": "4d459d5b75fd8a6623446290c542f99f1538cf84",
"question": "What kind of statue is at the end of the main drive?",
"answers": [
{
"answer_start": 667,
"text": "modern stone"
}
]
},
{
"id": "987a1e469c5b360f142b0a171e15cef17cd68ea6",
"question": "What type of dome is on the Main Building at Notre Dame?",
"answers": [
{
"answer_start": 79,
"text": "gold"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: all "None" in this dataset
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text.
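As a sanity check, the `answer_start` character index can be verified by slicing the context. This is a minimal sketch on a toy example, not taken from the dataset:

```python
def check_answer_alignment(context, answer):
    """Return True if the span at answer['answer_start'] matches answer['text']."""
    start = answer["answer_start"]
    return context[start:start + len(answer["text"])] == answer["text"]

context = "The Main Building's gold dome holds a golden statue."
answer = {"answer_start": 20, "text": "gold dome"}
```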
### Data Splits
The dataset is composed of a single split of 314,811 examples that we used in a two-stage fine-tuning process (refer to the paper for further details).
## Dataset Creation
### Curation Rationale
This dataset was created to investigate the effects of using synthetic adversarial data generation to improve robustness of state-of-the-art QA models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and a BART-Large generative model for the questions.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a support resource for improving the ability of systems to handle questions that contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, selected candidate answers, generated questions, quality re-labelling process, as well as any algorithmic biases that may be exacerbated from the adversarial annotation process used to collect the SQuAD and AdversarialQA data on which the generators were trained.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela during work carried out at University College London (UCL) and Facebook AI Research (FAIR).
### Licensing Information
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```
@inproceedings{bartolo-etal-2021-improving,
title = "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation",
author = "Bartolo, Max and
Thrush, Tristan and
Jia, Robin and
Riedel, Sebastian and
Stenetorp, Pontus and
Kiela, Douwe",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.696",
doi = "10.18653/v1/2021.emnlp-main.696",
pages = "8830--8848",
abstract = "Despite recent progress, state-of-the-art question answering models remain vulnerable to a variety of adversarial attacks. While dynamic adversarial data collection, in which a human annotator tries to write examples that fool a model-in-the-loop, can improve model robustness, this process is expensive which limits the scale of the collected data. In this work, we are the first to use synthetic adversarial data generation to make question answering models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation and show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8{\%} of the time on average, compared to 17.6{\%} for a model trained without synthetic data.",
}
```
### Contributions
Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset.
|
openclimatefix/mrms | 2022-06-22T13:39:35.000Z | [
"doi:10.57967/hf/0885",
"region:us"
] | openclimatefix | This dataset consists of MRMS precipitation radar data for the continental United States,
sampled at a 1km x 1km spatial resolution and 2-minute temporal resolution. | @InProceedings{ocf:mrms,
title = {MRMS Archival Precipitation Rate Radar Dataset},
author={Jacob Bieker
},
year={2022}
} | null | 7 | 16 | annotations_creators:
- machine-generated
language_creators:
- machine-generated
languages: []
licenses:
- mit
multilinguality: []
pretty_name: Multi-Radar/Multi-System Precipitation Radar
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- time-series-forecasting
- image-classification
- image-segmentation
- other
task_ids:
- univariate-time-series-forecasting
- multi-label-image-classification
- semantic-segmentation
# Dataset Card for MRMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mrms.nssl.noaa.gov/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org)
### Dataset Summary
Multi-Radar/Multi-System Precipitation Rate Radar data for 2016-2022. This data contains precipitation rate values for the continental United States.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This dataset was constructed to help recreate the original datasets used in the MetNet/MetNet-2 papers, as well as the Deep Generative Model of Radar paper. Those datasets were not publicly released, but this dataset should cover the time period used in those papers, and more.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
US Government License, no restrictions
### Citation Information
@article{ocf:mrms,
author = {Jacob Bieker},
title = {MRMS Precipitation Rate Dataset},
year = {2022}
} |
AlekseyKorshuk/romance-books | 2022-06-10T13:23:16.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 2 | 16 | Entry not found |
MicPie/unpredictable_phonearena-com | 2022-08-04T20:11:00.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | MicPie | The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card. | @misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
} | null | 0 | 16 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-phonearena-com
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-phonearena-com" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
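As a concrete illustration, the example dicts above can be concatenated into a few-shot prompt roughly like this (a minimal Python sketch; the exact prompt layout used in the paper may differ):

```python
import json

def format_few_shot(examples, n_shots=3):
    # Turn a list of UnpredicTable example dicts ('input', 'options',
    # 'output') into one few-shot prompt string. The layout here is
    # illustrative, not the format used in the paper.
    lines = []
    for ex in examples[:n_shots]:
        if ex.get("options"):
            lines.append("Options: " + " / ".join(ex["options"]))
        lines.append("Input: " + ex["input"])
        lines.append("Output: " + ex["output"])
        lines.append("")
    return "\n".join(lines)

# Each task is one .jsonl file, with one example per line:
raw = [
    '{"task": "demo", "input": "Model: iPhone 13", "options": [], "output": "Apple"}',
    '{"task": "demo", "input": "Model: Pixel 6", "options": [], "output": "Google"}',
]
examples = [json.loads(line) for line in raw]
prompt = format_few_shot(examples)
```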
### Data Fields
'task': task identifier
'input': column elements of a specific row in the table.
'options': for multiple choice classification, it provides the options to choose from.
'output': target column element of the same row as input.
'pageTitle': the title of the page containing the table.
'outputColName': output column name
'url': url to the website containing the table
'wdcFile': WDC Web Table Corpus file
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
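Since no predefined splits ship with the data, any held-out evaluation split has to be made by the user. One simple way to do this, sketched in plain Python with a fixed seed for reproducibility:

```python
import random

def split_tasks(examples, test_frac=0.2, seed=0):
    # Shuffle and split a single-split list of examples into
    # train/test portions. Deterministic for a given seed.
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)
    n_test = int(len(examples) * test_frac)
    test = [examples[i] for i in idx[:n_test]]
    train = [examples[i] for i in idx[n_test:]]
    return train, test

train, test = split_tasks(list(range(100)))
```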
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
|
sileod/wikimedqa | 2023-05-16T07:47:46.000Z | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] | sileod | Anonymous submission | @article{sileo2023generating,
title={Generating multiple-choice questions for medical question answering with distractors and cue-masking},
author={Sileo, Damien and Uma, Kanimozhi and Moens, Marie-Francine},
journal={arXiv preprint arXiv:2303.07069},
year={2023}
} | null | 5 | 16 | ---
license: apache-2.0
task_categories:
- text-classification
- multiple-choice
language:
- en
tags:
- medical
---
```bib
@article{sileo2023wikimedqa,
title={Generating multiple-choice questions for medical question answering with distractors and cue-masking},
author={Sileo, Damien and Uma, Kanimozhi and Moens, Marie-Francine},
journal={arXiv preprint arXiv:2303.07069},
year={2023}
}
``` |
tarteel-ai/EA-UD | 2022-07-15T03:04:41.000Z | [
"region:us"
] | tarteel-ai | null | null | null | 0 | 16 | Entry not found |
USC-MOLA-Lab/MFRC | 2022-08-26T00:36:03.000Z | [
"arxiv:2208.05545",
"region:us"
] | USC-MOLA-Lab | null | null | null | 4 | 16 | # Dataset Card for MFRC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Reddit posts annotated for moral foundations
### Supported Tasks and Leaderboards
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- text
- subreddit
- bucket
- annotator
- annotation
- confidence
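Because each text carries rows from multiple annotators, a common first step is to collapse those rows into one label per text. A minimal sketch (the `annotation` field name comes from the list above; the majority-vote rule is an illustrative choice here, not necessarily what the paper does):

```python
from collections import Counter

def majority_label(rows):
    # Collapse per-annotator rows for one text into a single label
    # by majority vote; ties are broken by Counter ordering.
    counts = Counter(r["annotation"] for r in rows)
    return counts.most_common(1)[0][0]

rows = [{"annotation": "Care"}, {"annotation": "Care"}, {"annotation": "Fairness"}]
label = majority_label(rows)
```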
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
cc-by-4.0
### Citation Information
```bibtex
@misc{trager2022moral,
title={The Moral Foundations Reddit Corpus},
author={Jackson Trager and Alireza S. Ziabari and Aida Mostafazadeh Davani and Preni Golazazian and Farzan Karimi-Malekabadi and Ali Omrani and Zhihe Li and Brendan Kennedy and Nils Karl Reimer and Melissa Reyes and Kelsey Cheng and Mellow Wei and Christina Merrifield and Arta Khosravi and Evans Alvarez and Morteza Dehghani},
year={2022},
eprint={2208.05545},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
|
masakhane/mafand | 2023-09-11T18:01:53.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:fr",
"language:am",
"language:bm",
"lang... | masakhane | MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are:
- Amharic
- Bambara
- Ghomala
- Ewe
- Fon
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Nigerian-Pidgin
- Chichewa
- Shona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
The train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho
For more details see https://aclanthology.org/2022.naacl-main.223/ | @inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
} | null | 6 | 16 | ---
annotations_creators:
- expert-generated
language:
- en
- fr
- am
- bm
- bbj
- ee
- fon
- ha
- ig
- lg
- mos
- ny
- pcm
- rw
- sn
- sw
- tn
- tw
- wo
- xh
- yo
- zu
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- translation
- multilingual
pretty_name: mafand
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- news, mafand, masakhane
task_categories:
- translation
task_ids: []
---
# Dataset Card for MAFAND
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/masakhane-io/lafand-mt
- **Repository:** https://github.com/masakhane-io/lafand-mt
- **Paper:** https://aclanthology.org/2022.naacl-main.223/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [David Adelani](https://dadelani.github.io/)
### Dataset Summary
MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
The languages covered are:
- Amharic
- Bambara
- Ghomala
- Ewe
- Fon
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Nigerian-Pidgin
- Chichewa
- Shona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
## Dataset Structure
### Data Instances
```
>>> from datasets import load_dataset
>>> data = load_dataset('masakhane/mafand', 'en-yor')
{"translation": {"src": "President Buhari will determine when to lift lockdown – Minister", "tgt": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà"}}
{"translation": {"en": "President Buhari will determine when to lift lockdown – Minister", "yo": "Ààrẹ Buhari ló lè yóhùn padà lórí ètò kónílégbélé – Mínísítà"}}
```
### Data Fields
- "translation": name of the task
- "src" : source language e.g en
- "tgt": target language e.g yo
### Data Splits
Train/dev/test split
language| Train| Dev |Test
-|-|-|-
amh |-|899|1037
bam |3302|1484|1600
bbj |2232|1133|1430
ewe |2026|1414|1563
fon |2637|1227|1579
hau |5865|1300|1500
ibo |6998|1500|1500
kin |-|460|1006
lug |4075|1500|1500
luo |4262|1500|1500
mos |2287|1478|1574
nya |-|483|1004
pcm |4790|1484|1574
sna |-|556|1005
swa |30782|1791|1835
tsn |2100|1340|1835
twi |3337|1284|1500
wol |3360|1506|1500|
xho |-|486|1002|
yor |6644|1544|1558|
zul |3500|1239|998|
## Dataset Creation
### Curation Rationale
MAFAND was created from news-domain text, translated from English or French into an African language
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
- [Masakhane](https://github.com/masakhane-io/lafand-mt)
- [Igbo](https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_en_mt)
- [Swahili](https://opus.nlpl.eu/GlobalVoices.php)
- [Hausa](https://www.statmt.org/wmt21/translation-task.html)
- [Yoruba](https://github.com/uds-lsv/menyo-20k_MT)
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
Masakhane members
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[CC-BY-4.0-NC](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
```
@inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
}
``` |
jamescalam/unsplash-25k-photos | 2022-09-13T13:02:46.000Z | [
"task_categories:image-to-image",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:text-to-image",
"task_categories:zero-shot-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n... | jamescalam | This is a dataset that streams photos data from the Unsplash 25K servers. | @InProceedings{huggingface:dataset,
title = {Unsplash Lite Dataset 1.2.0 Photos},
author={Unsplash},
year={2022}
} | null | 24 | 16 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Unsplash Lite 25K Photos
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- images
- unsplash
- photos
task_categories:
- image-to-image
- image-classification
- image-to-text
- text-to-image
- zero-shot-image-classification
task_ids: []
---
# Unsplash Lite Dataset Photos
This dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file `photos.tsv000`. The dataset builder script streams this data directly from the Unsplash 25K dataset source.
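As an illustrative sketch, the `photos.tsv000` file can also be read directly with the Python standard library once downloaded; the column names in the example below are illustrative only, and the full field list is in the documentation linked in this card:

```python
import csv

def iter_photos(tsv_path):
    """Stream records from an Unsplash photos TSV one row at a time.

    Each row is yielded as a dict keyed by the column names in the
    file's header row, so we never hold the whole file in memory.
    """
    with open(tsv_path, encoding="utf-8", newline="") as f:
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            yield row
```

This streams rows lazily, which mirrors how the dataset builder script avoids materializing the full file.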
For full details, please see the [Unsplash Dataset GitHub repo](https://github.com/unsplash/datasets), or read the preview (copied from the repo) below.
---
# The Unsplash Dataset

The Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning.
The Unsplash Dataset is offered in two datasets:
- the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches
- the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches
As the Unsplash library continues to grow, we’ll release updates to the dataset with new fields and new images, with each subsequent release being [semantically versioned](https://semver.org/).
We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can [open an issue](https://github.com/unsplash/datasets/issues/new/choose) to report a problem or to let us know what you would like to see in the next release of the datasets.
For more on the Unsplash Dataset, see [our announcement](https://unsplash.com/blog/the-unsplash-dataset/) and [site](https://unsplash.com/data).
## Download
### Lite Dataset
The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md).
[⬇️ Download the Lite dataset](https://unsplash.com/data/lite/latest) [~650MB compressed, ~1.4GB raw]
### Full Dataset
The Full dataset is available for non-commercial usage and all uses must abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). To access, please go to [unsplash.com/data](https://unsplash.com/data) and request access. The dataset weighs ~20 GB compressed (~43 GB raw).
## Documentation
See the [documentation for a complete list of tables and fields](https://github.com/unsplash/datasets/blob/master/DOCS.md).
## Usage
You can follow these examples to load the dataset in these common formats:
- [Load the dataset in a PostgreSQL database](https://github.com/unsplash/datasets/tree/master/how-to/psql)
- [Load the dataset in a Python environment](https://github.com/unsplash/datasets/tree/master/how-to/python)
- [Submit an example doc](https://github.com/unsplash/datasets/blob/master/how-to/README.md#submit-an-example)
## Share your work
We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.
We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at [data@unsplash.com](mailto:data@unsplash.com).
If you're using the dataset in a research paper, you can attribute the dataset as `Unsplash Lite Dataset 1.2.0` or `Unsplash Full Dataset 1.2.0` and link to the permalink [`unsplash.com/data`](https://unsplash.com/data).
----
The Unsplash Dataset is made available for research purposes. [It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md). To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers).
 |
D3xter1922/proofwriter-dataset | 2022-10-04T12:26:37.000Z | [
"region:us"
] | D3xter1922 | null | null | null | 1 | 16 | Entry not found |
laion/laion2B-multi-joined-translated-to-en | 2022-10-11T20:33:48.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | null | 1 | 16 | ---
license: cc-by-4.0
---
|
arbml/OSACT4_hatespeech | 2022-10-14T19:48:40.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 16 | ---
dataset_info:
features:
- name: tweet
dtype: string
- name: offensive
dtype: string
- name: hate
dtype: string
splits:
- name: train
num_bytes: 1417732
num_examples: 6838
- name: validation
num_bytes: 204725
num_examples: 999
download_size: 802812
dataset_size: 1622457
---
# Dataset Card for "OSACT4_hatespeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SALT-NLP/FLUE-FiQA | 2022-10-21T17:29:14.000Z | [
"license:cc-by-3.0",
"region:us"
] | SALT-NLP | null | null | null | 2 | 16 | ---
license: cc-by-3.0
---
## Dataset Summary
- **Homepage:** https://sites.google.com/view/salt-nlp-flang
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark built from five diverse financial domain-specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://huggingface.co/datasets/SALT-NLP/FLUE-NER)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Dataset Structure
The FiQA dataset has a corpus, queries and qrels (relevance judgments file). They are in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. The first row is a header. For example: `q1 doc1 1`
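A minimal sketch of loading these three files with the Python standard library (assuming the conventional BEIR-style filenames `corpus.jsonl`, `queries.jsonl`, and `qrels.tsv`):

```python
import csv
import json

def load_beir_dataset(corpus_path, queries_path, qrels_path):
    """Load a BEIR-style corpus, queries, and relevance judgments."""
    corpus = {}
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}

    queries = {}
    with open(queries_path, encoding="utf-8") as f:
        for line in f:
            q = json.loads(line)
            queries[q["_id"]] = q["text"]

    qrels = {}
    with open(qrels_path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)

    return corpus, queries, qrels
```

The returned dictionaries map IDs to documents, query texts, and per-query relevance scores, which is the shape most retrieval evaluation tooling expects.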
|
declare-lab/MELD | 2022-10-24T04:48:06.000Z | [
"license:gpl-3.0",
"region:us"
] | declare-lab | null | null | null | 3 | 16 | ---
license: gpl-3.0
---
|
bigbio/genia_relation_corpus | 2022-12-22T15:44:40.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The extraction of various relations stated to hold between biomolecular entities is one of the most frequently
addressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein
interactions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the
state or properties of biomolecules are captured in the event annotation.
The GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)
static relations, relations such as part-of that hold between entities without (necessarily) involving change. | @inproceedings{pyysalo-etal-2009-static,
title = "Static Relations: a Piece in the Biomedical Information Extraction Puzzle",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1301",
pages = "1--9",
}
@article{article,
author = {Ohta, Tomoko and Pyysalo, Sampo and Kim, Jin-Dong and Tsujii, Jun'ichi},
year = {2010},
month = {10},
pages = {917-28},
title = {A reevaluation of biomedical named entity - term relations},
volume = {8},
journal = {Journal of bioinformatics and computational biology},
doi = {10.1142/S0219720010005014}
}
@MISC{Hoehndorf_applyingontology,
author = {Robert Hoehndorf and Axel-cyrille Ngonga Ngomo and Sampo Pyysalo and Tomoko Ohta and Anika Oellrich and
Dietrich Rebholz-schuhmann},
title = {Applying ontology design patterns to the implementation of relations in GENIA},
year = {}
} | null | 0 | 16 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: GENIA Relation Corpus
homepage: http://www.geniaproject.org/genia-corpus/relation-corpus
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
---
# Dataset Card for GENIA Relation Corpus
## Dataset Description
- **Homepage:** http://www.geniaproject.org/genia-corpus/relation-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The extraction of various relations stated to hold between biomolecular entities is one of the most frequently
addressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein
interactions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the
state or properties of biomolecules are captured in the event annotation.
The GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)
static relations, relations such as part-of that hold between entities without (necessarily) involving change.
## Citation Information
```
@inproceedings{pyysalo-etal-2009-static,
title = "Static Relations: a Piece in the Biomedical Information Extraction Puzzle",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1301",
pages = "1--9",
}
@article{article,
author = {Ohta, Tomoko and Pyysalo, Sampo and Kim, Jin-Dong and Tsujii, Jun'ichi},
year = {2010},
month = {10},
pages = {917-28},
title = {A reevaluation of biomedical named entity - term relations},
volume = {8},
journal = {Journal of bioinformatics and computational biology},
doi = {10.1142/S0219720010005014}
}
@MISC{Hoehndorf_applyingontology,
author = {Robert Hoehndorf and Axel-cyrille Ngonga Ngomo and Sampo Pyysalo and Tomoko Ohta and Anika Oellrich and
Dietrich Rebholz-schuhmann},
title = {Applying ontology design patterns to the implementation of relations in GENIA},
year = {}
}
```
|
osanseviero/twitter-airline-sentiment | 2022-11-16T22:31:48.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | osanseviero | null | null | null | 0 | 16 | ---
license:
- cc-by-nc-sa-4.0
converted_from: kaggle
kaggle_id: crowdflower/twitter-airline-sentiment
---
# Dataset Card for Twitter US Airline Sentiment
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/crowdflower/twitter-airline-sentiment
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
*This data originally came from [Crowdflower's Data for Everyone library](http://www.crowdflower.com/data-for-everyone).*
As the original source says,
> A sentiment analysis job about the problems of each major U.S. airline. Twitter data was scraped from February of 2015 and contributors were asked to first classify positive, negative, and neutral tweets, followed by categorizing negative reasons (such as "late flight" or "rude service").
The data we're providing on Kaggle is a slightly reformatted version of the original source. It includes both a CSV file and SQLite database. The code that does these transformations is [available on GitHub](https://github.com/benhamner/crowdflower-airline-twitter-sentiment)
For example, it contains whether the sentiment of the tweets in this set was positive, neutral, or negative for six US airlines:
[](https://www.kaggle.com/benhamner/d/crowdflower/twitter-airline-sentiment/exploring-airline-twitter-sentiment-data)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@crowdflower](https://kaggle.com/crowdflower)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
Numerati/numerai-datasets | 2022-12-11T13:11:50.000Z | [
"task_categories:time-series-forecasting",
"task_categories:tabular-classification",
"task_categories:other",
"task_ids:multivariate-time-series-forecasting",
"task_ids:tabular-single-column-regression",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:othe... | Numerati | null | null | null | 2 | 16 | ---
annotations_creators:
- no-annotation
language: []
language_creators:
- machine-generated
license:
- unknown
multilinguality:
- other-my-multilinguality
pretty_name: Numerai Dataset
size_categories: []
source_datasets:
- original
tags:
- numerai
- stock market
- hedge fund
- obfuscated
task_categories:
- time-series-forecasting
- tabular-classification
- other
task_ids:
- multivariate-time-series-forecasting
- tabular-single-column-regression
---
# Numerai Datasets
This is a mirror of the official numerai dataset - NOT OFFICIALLY SUPPORTED OR MAINTAINED BY NUMERAI.
Official source: https://numer.ai/data
Use the official source to submit your predictions, no guarantees for correctness or completeness.
This is maintained by the Numerai community. |
Anthropic/model-written-evals | 2022-12-21T02:33:18.000Z | [
"task_categories:multiple-choice",
"task_categories:zero-shot-classification",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:multiple-choice-coreference-resolution",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monol... | Anthropic | null | null | null | 29 | 16 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Evaluations from "Discovering Language Model Behaviors with Model-Written
Evaluations"
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- gender bias
- social bias
- AI safety
- personality
- politics
task_categories:
- multiple-choice
- zero-shot-classification
- question-answering
task_ids:
- multiple-choice-qa
- multiple-choice-coreference-resolution
---
# Model-Written Evaluation Datasets
This repository includes datasets written by language models, used in our paper on "Discovering Language Model Behaviors with Model-Written Evaluations."
We intend the datasets to be useful to:
1. Those who are interested in understanding the quality and properties of model-generated data
2. Those who wish to use our datasets to evaluate other models for the behaviors we examined in our work (e.g., related to model persona, sycophancy, advanced AI risks, and gender bias)
The evaluations were generated to be asked to dialogue agents (e.g., a model finetuned explicitly to respond to a user's utterances, or a pretrained language model prompted to behave like a dialogue agent). However, it is possible to adapt the data to test other kinds of models as well.
We describe each of our collections of datasets below:
1. `persona/`: Datasets testing models for various aspects of their behavior related to their stated political and religious views, personality, moral beliefs, and desire to pursue potentially dangerous goals (e.g., self-preservation or power-seeking).
2. `sycophancy/`: Datasets testing models for whether or not they repeat back a user's view to various questions (in philosophy, NLP research, and politics)
3. `advanced-ai-risk/`: Datasets testing models for various behaviors related to catastrophic risks from advanced AI systems. These datasets were generated in a few-shot manner. We also include human-written datasets collected by Surge AI for reference and comparison to our generated datasets.
4. `winogenerated/`: Our larger, model-generated version of the Winogender Dataset ([Rudinger et al., 2018](https://arxiv.org/abs/1804.09301)). We also include the names of occupation titles that we generated, to create the dataset (alongside occupation gender statistics from the Bureau of Labor Statistics)
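As an illustrative sketch only (assuming each eval item carries `question` and `answer_matching_behavior` fields, as in the persona datasets), a scoring loop over one of these collections might look like:

```python
def matching_behavior_rate(examples, model_answer_fn):
    """Fraction of eval items where the model's answer matches the
    target behavior.

    `model_answer_fn` is any callable mapping a question string to the
    model's answer string; whitespace is stripped before comparison.
    """
    matches = 0
    for ex in examples:
        answer = model_answer_fn(ex["question"])
        matches += answer.strip() == ex["answer_matching_behavior"].strip()
    return matches / len(examples)
```

Reporting this rate per dataset is one simple way to compare how strongly different models exhibit a given behavior.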
Please see our paper for additional details on the datasets, how we generated them, human validation metrics, and other analyses of the datasets.
**Disclaimer**: As discussed in our paper, some data contains content that includes social biases and stereotypes. The data may also contain other forms of harmful or offensive content. The views expressed in the data do not reflect the views of Anthropic or any of its employees.
## Contact
For questions, please email `ethan at anthropic dot com`
## Bibtex Citation
If you would like to cite our work or data, you may use the following bibtex citation:
```
@misc{perez2022discovering,
doi = {10.48550/ARXIV.2212.09251},
url = {https://arxiv.org/abs/2212.09251},
author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, Andy and Chen, Anna and Mann, Ben and Israel, Brian and Seethor, Bryan and McKinnon, Cameron and Olah, Christopher and Yan, Da and Amodei, Daniela and Amodei, Dario and Drain, Dawn and Li, Dustin and Tran-Johnson, Eli and Khundadze, Guro and Kernion, Jackson and Landis, James and Kerr, Jamie and Mueller, Jared and Hyun, Jeeyoon and Landau, Joshua and Ndousse, Kamal and Goldberg, Landon and Lovitt, Liane and Lucas, Martin and Sellitto, Michael and Zhang, Miranda and Kingsland, Neerav and Elhage, Nelson and Joseph, Nicholas and Mercado, Noemí and DasSarma, Nova and Rausch, Oliver and Larson, Robin and McCandlish, Sam and Johnston, Scott and Kravec, Shauna and {El Showk}, Sheer and Lanham, Tamera and Telleen-Lawton, Timothy and Brown, Tom and Henighan, Tom and Hume, Tristan and Bai, Yuntao and Hatfield-Dodds, Zac and Clark, Jack and Bowman, Samuel R. and Askell, Amanda and Grosse, Roger and Hernandez, Danny and Ganguli, Deep and Hubinger, Evan and Schiefer, Nicholas and Kaplan, Jared},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Discovering Language Model Behaviors with Model-Written Evaluations},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
jamescalam/ml-qa | 2023-01-04T12:26:06.000Z | [
"region:us"
] | jamescalam | null | null | null | 0 | 16 | Entry not found |
archanatikayatray/aeroBERT-NER | 2023-05-20T22:40:58.000Z | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"NER",
"Aerospace",
"ORG",
"SYS",
"DATETIME",
"RESOURCE",
"VALUE",
"doi:10.57967/hf/0470",
"region:us"
] | archanatikayatray | null | null | null | 2 | 16 | ---
license: apache-2.0
task_categories:
- token-classification
language:
- en
tags:
- NER
- Aerospace
- ORG
- SYS
- DATETIME
- RESOURCE
- VALUE
pretty_name: all_text_annotation_NER.txt
size_categories:
- n<1K
---
# Dataset Card for aeroBERT-NER
## Dataset Description
- **Paper:** aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT
- **Point of Contact:** archanatikayatray@gmail.com
### Dataset Summary
This dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme.
There are a total of 1432 sentences. The creation of this dataset is aimed at - <br>
(1) Making available an **open-source** dataset for aerospace requirements which are often proprietary <br>
(2) Fine-tuning language models for **token identification** (NER) specific to the aerospace domain <br>
This dataset can be used for training or fine-tuning language models to identify the named entities mentioned in aerospace texts.
## Dataset Structure
The dataset is of the format: ``Sentence-Number * WordPiece-Token * NER-tag`` <br>
"*" is used as a delimiter to avoid confusion with commas (",") that occur in the text. The following example shows the dataset structure for Sentence #1431. <br>
1431\*the\*O <br>
1431\*airplane\*B-SYS <br>
1431\*takeoff\*O <br>
1431\*performance\*O <br>
1431\*must\*O <br>
1431\*be\*O <br>
1431\*determined\*O <br>
1431\*for\*O <br>
1431\*climb\*O <br>
1431\*gradients\*O <br>
1431\*.\*O <br>
## Dataset Creation
### Source Data
Two types of aerospace texts are used to create the aerospace corpus for fine-tuning BERT: <br>
(1) general aerospace texts such as publications by the National Academy of Space Studies Board, and <br>
(2) certification requirements from Title 14 CFR. A total of 1432 sentences from the aerospace domain were included in the corpus. <br>
### Importing dataset into Python environment
Use the following code chunk to import the dataset into Python environment as a DataFrame.
```
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("archanatikayatray/aeroBERT-NER")
#Converting the dataset into a pandas DataFrame
dataset = pd.DataFrame(dataset["train"]["text"])
dataset = dataset[0].str.split('*', expand = True)
#Getting the headers from the first row
header = dataset.iloc[0]
#Excluding the first row since it contains the headers
dataset = dataset[1:]
#Assigning the header to the DataFrame
dataset.columns = header
#Viewing the last 10 rows of the annotated dataset
dataset.tail(10)
```
### Annotations
#### Annotation process
A Subject Matter Expert (SME) was consulted for deciding on the annotation categories. The BIO Tagging scheme was used for annotating the dataset.
**B** - Beginning of entity <br>
**I** - Inside an entity <br>
**O** - Outside an entity <br>
| Category | NER Tags | Example |
| :----: | :----: | :----: |
| System | B-SYS, I-SYS | exhaust heat exchangers, powerplant, auxiliary power unit |
| Value | B-VAL, I-VAL | 1.2 percent, 400 feet, 10 to 19 passengers |
| Date time | B-DATETIME, I-DATETIME | 2013, 2019, May 11,1991 |
| Organization | B-ORG, I-ORG | DOD, Ames Research Center, NOAA |
| Resource | B-RES, I-RES | Section 25-341, Sections 25-173 through 25-177, Part 23 subpart B |
The distribution of the various entities in the corpus is shown below - <br>
| NER Tag | Description | Count |
| :----: | :----: | :----: |
| O | Tokens that are not identified as any NE | 37686 |
| B-SYS | Beginning of a system NE | 1915 |
| I-SYS | Inside a system NE | 1104 |
| B-VAL | Beginning of a value NE | 659 |
| I-VAL | Inside a value NE | 507 |
| B-DATETIME | Beginning of a date time NE | 147 |
| I-DATETIME | Inside a date time NE | 63 |
| B-ORG | Beginning of an organization NE | 302 |
| I-ORG | Inside a organization NE | 227 |
| B-RES | Beginning of a resource NE | 390 |
| I-RES | Inside a resource NE | 1033 |
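As a sketch (this helper is illustrative and not part of the dataset), BIO tags like the ones above can be decoded back into entity spans:

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, token_list) spans.

    A span starts at a B- tag, extends through consecutive I- tags of
    the same type, and closes on O or on a tag of a different type.
    """
    spans = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_type is not None:
                spans.append((current_type, current_tokens))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:
            if current_type is not None:
                spans.append((current_type, current_tokens))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, current_tokens))
    return spans
```

For example, decoding the tags of Sentence #1431 shown earlier would recover `airplane` as a single SYS span.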
### Limitations
(1) The dataset is imbalanced, as is natural for language data (not every token is a Named-Entity). Hence, using ``Accuracy`` as a metric for model performance is NOT a good idea. Precision, Recall, and F1 scores are suggested for model performance evaluation.
(2) This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing sets after importing the data into a Python environment.
Please refer to the Appendix of the paper for information on the test set.
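One illustrative way to perform such a split is to partition by sentence number, so that all tokens of a sentence land in the same partition; the proportions and seed below are assumptions, not recommendations from the paper:

```python
import random

def split_by_sentence(sentence_ids, train=0.8, val=0.1, seed=42):
    """Split unique sentence IDs (not individual token rows) into
    train/validation/test lists, so no sentence leaks across splits."""
    ids = sorted(set(sentence_ids))
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(ids)
    n_train = int(len(ids) * train)
    n_val = int(len(ids) * val)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

The resulting ID lists can then be used to filter the token-level DataFrame produced by the loading snippet above.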
### Citation Information
```
@Article{aeroBERT-NER,
AUTHOR = {Tikayat Ray, Archana and Pinon Fischer, Olivia J. and Mavris, Dimitri N. and White, Ryan T. and Cole, Bjorn F.},
TITLE = {aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT},
JOURNAL = {AIAA SCITECH 2023 Forum},
YEAR = {2023},
URL = {https://arc.aiaa.org/doi/10.2514/6.2023-2583},
DOI = {10.2514/6.2023-2583}
}
@phdthesis{tikayatray_thesis,
author = {Tikayat Ray, Archana},
title = {Standardization of Engineering Requirements Using Large Language Models},
school = {Georgia Institute of Technology},
year = {2023},
doi = {10.13140/RG.2.2.17792.40961},
URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04}
}
```
|