id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
Ryukijano/eurosat | 2023-07-19T12:23:14.000Z | [
"region:us"
] | Ryukijano | null | null | null | 0 | 88 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AnnualCrop
'1': Forest
'2': HerbaceousVegetation
'3': Highway
'4': Industrial
'5': Pasture
'6': PermanentCrop
'7': Residential
'8': River
'9': SeaLake
splits:
- name: train
num_bytes: 88397609.0
num_examples: 27000
download_size: 0
dataset_size: 88397609.0
---
# Dataset Card for "eurosat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
codefuse-ai/Evol-instruction-66k | 2023-09-10T02:45:52.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | codefuse-ai | null | null | null | 45 | 88 | ---
license: cc-by-nc-sa-4.0
viewer: false
---
# Dataset Card for CodeFuse-Evol-instruction-66k
<div align='center'>

[[中文]](#chinese) [[English]](#english)
</div>
<a id="english"></a>
## Dataset Description
Evol-instruction-66k is built with the method described in the paper "WizardCoder: Empowering Code Large Language Models with Evol-Instruct". It improves the fine-tuning of pre-trained code large models by adding complex code instructions.
The data is derived from the open-source dataset [Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1). Processing includes low-quality filtering, filtering of data similar to the HumanEval benchmark, etc. After filtering the original 80k samples, we obtain 66k high-quality samples for training and fine-tuning.
## Creation
<strong>Filtering low-quality data</strong><br>
(1) Filter data with instruction length less than 10 words or greater than 1000 words;<br>
(2) Filter data with output length less than 50 words or greater than 2000 words;<br>
(3) Filter data with no markdown format or multiple markdown formats in the output;<br>
(4) Filter data with less than 3 lines or more than 100 lines of code in the markdown format;<br>
(5) Filter data with a description preceding the markdown format containing more than 200 words.<br>
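As an illustration, the five rules above can be collapsed into a single predicate. This is a hedged sketch, not the original pipeline code: whitespace word counting, reading "markdown format" as a fenced code block, and the inclusive thresholds are all assumptions.

```python
import re

# Illustrative re-implementation of the five low-quality filters above.
# Tokenization (whitespace words) and the notion of "markdown format"
# (a fenced ``` code block) are assumptions; the original pipeline may differ.
CODE_BLOCK = re.compile(r"```.*?```", re.DOTALL)

def n_words(text: str) -> int:
    return len(text.split())

def passes_quality_filters(instruction: str, output: str) -> bool:
    if not 10 <= n_words(instruction) <= 1000:   # rule (1)
        return False
    if not 50 <= n_words(output) <= 2000:        # rule (2)
        return False
    blocks = CODE_BLOCK.findall(output)
    if len(blocks) != 1:                         # rule (3): exactly one block
        return False
    # rule (4): 3-100 non-empty lines inside the fence (language tag included here)
    code_lines = [l for l in blocks[0][3:-3].splitlines() if l.strip()]
    if not 3 <= len(code_lines) <= 100:
        return False
    # rule (5): description preceding the block at most 200 words
    description = output.split("```", 1)[0]
    return n_words(description) <= 200
```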
<strong>Filtering data similar to humaneval</strong><br>
(1) Filter data containing any function name from humaneval;<br>
(2) Use NLTK to remove stop words and punctuation from the HumanEval docstrings to obtain core words such as "sort array prime"; then filter data containing more than 40% of HumanEval's core words.<br>
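The core-word filter can be sketched as follows. This is illustrative only: a tiny inline stop-word set stands in for NLTK's list so the example is self-contained, and the 40% overlap threshold follows the rule above.

```python
import re

# Sketch of the HumanEval core-word filter. The real pipeline uses NLTK's
# stop-word list and the actual HumanEval docstrings; a small inline
# stop-word set stands in for NLTK here.
STOP_WORDS = {"a", "an", "the", "of", "in", "to", "and", "is", "are", "that"}

def core_words(docstring: str) -> set:
    tokens = re.findall(r"[a-z]+", docstring.lower())
    return {t for t in tokens if t not in STOP_WORDS}

def too_similar(sample_text: str, humaneval_core: set, threshold: float = 0.4) -> bool:
    """True if sample_text contains more than `threshold` of the core words."""
    sample_tokens = set(re.findall(r"[a-z]+", sample_text.lower()))
    overlap = len(humaneval_core & sample_tokens) / max(len(humaneval_core), 1)
    return overlap > threshold
```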
## Structure
````
Instruction:
Execute a multistage computational task consisting of organizing the provided list of numerals in an ascending sequence and then determining the middle, or median, figure from the resultant ordered sequence.
data = [12, 30, 4, 18, 24]
Output:
Here's a Python solution:
```python
data = [12, 30, 4, 18, 24]

# Organize the list in ascending sequence
data.sort()

# Find the middle figure from the sorted sequence
n = len(data)
if n % 2 == 0:
    median = (data[n//2 - 1] + data[n//2]) / 2
else:
    median = data[n//2]

print(f"Sorted Data: {data}")
print(f"Median: {median}")
```
First, we sort the list in ascending order using the `sort()` function. Then, we find the median. If the length of the list is even, the median is the average of the two middle numbers. If the length is odd, the median is the middle number. In this specific case, because the list length is odd (5), the middle value (i.e., the median) is the 3rd value in the sorted list.
````
<a id="chinese"></a>
## Introduction
Evol-instruction-66k is built with the method described in the paper "WizardCoder: Empowering Code Large Language Models with Evol-Instruct", which improves the fine-tuning of pre-trained code large models by adding complex code instructions.
The data is derived from the open-source dataset [Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) through a series of processing steps, including low-quality filtering and filtering of data similar to the HumanEval benchmark; filtering the original 80k samples yields 66k high-quality samples for training and fine-tuning.
## Data Creation Process
<strong>Filtering low-quality data</strong><br>
(1) Filter data whose instruction is shorter than 10 words or longer than 1000 words;<br>
(2) Filter data whose output is shorter than 50 words or longer than 2000 words;<br>
(3) Filter data whose output has no markdown block or multiple markdown blocks;<br>
(4) Filter data whose markdown block contains fewer than 3 or more than 100 lines of code;<br>
(5) Filter data whose description before the markdown block exceeds 200 words.<br>
<strong>Filtering data similar to HumanEval</strong><br>
(1) Filter data containing any function name from HumanEval;<br>
(2) Use NLTK to remove stop words and punctuation from the HumanEval docstrings to obtain core words such as "sort array prime"; then filter data containing more than 40% of HumanEval's core words.<br>
## Data Structure
````
Instruction:
Execute a multistage computational task consisting of organizing the provided list of numerals in an ascending sequence and then determining the middle, or median, figure from the resultant ordered sequence.
data = [12, 30, 4, 18, 24]
Output:
Here's a Python solution:
```python
data = [12, 30, 4, 18, 24]

# Organize the list in ascending sequence
data.sort()

# Find the middle figure from the sorted sequence
n = len(data)
if n % 2 == 0:
    median = (data[n//2 - 1] + data[n//2]) / 2
else:
    median = data[n//2]

print(f"Sorted Data: {data}")
print(f"Median: {median}")
```
First, we sort the list in ascending order using the `sort()` function. Then, we find the median. If the length of the list is even, the median is the average of the two middle numbers. If the length is odd, the median is the middle number. In this specific case, because the list length is odd (5), the middle value (i.e., the median) is the 3rd value in the sorted list.
```` |
euclaise/stage1 | 2023-09-26T17:49:46.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | euclaise | null | null | null | 0 | 88 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 827191765.0
num_examples: 49017
download_size: 477329322
dataset_size: 827191765.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "stage1"
Combination of goodwiki and rosettacode-parsed |
lchakkei/OpenOrca-Traditional-Chinese | 2023-10-10T15:37:20.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:zh",
"license:mit",
"arxiv:2301.13688",
"region:us"
] | lchakkei | null | null | null | 1 | 88 | ---
language:
- zh
license: mit
size_categories:
- 10M<n<100M
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca-Chinese
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 6478961361
num_examples: 4233915
download_size: 4105074882
dataset_size: 6478961361
---
<p><h1>🐋 The OpenOrca-Chinese Dataset! 🐋</h1></p>
Thanks to the release of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, which provides a valuable resource for NLP researchers and developers!
This is a Traditional Chinese translation of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, produced with Google Translate; we hope it makes a small contribution to Chinese LLM research.
<br/>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
It currently contains ~1M GPT-4 completions and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier that includes one of 'niv', 't0', 'cot', or 'flan', indicating which FLAN Collection submix the 'question' comes from.
2) 'system_prompt', the system prompt presented to the GPT-3.5 or GPT-4 API for the datapoint.
3) 'question', a question entry as provided by the FLAN Collection.
4) 'response', the response to that question received from a query to either GPT-3.5 or GPT-4.
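As a sketch of how the 'id' field can be used, the helper below recovers the submix tag from an id; the dotted "tag.number" id format it assumes is for illustration only.

```python
# Hedged sketch: recover which FLAN submix a record came from by inspecting
# its 'id'. The dotted "tag.number" id format shown here is an assumption
# for illustration, not a documented guarantee.
SUBMIXES = ("niv", "t0", "cot", "flan")

def submix_of(record_id: str) -> str:
    for tag in SUBMIXES:
        if tag in record_id.split("."):
            return tag
    return "unknown"
```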
|
boardsec/yara_dataset_v4 | 2023-09-17T01:52:10.000Z | [
"region:us"
] | boardsec | null | null | null | 0 | 88 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Chunk
dtype: string
- name: yara_rule
dtype: string
- name: cleaned_yara_rule
dtype: string
splits:
- name: train
num_bytes: 36039
num_examples: 67
download_size: 15832
dataset_size: 36039
---
# Dataset Card for "yara_dataset_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qgyd2021/cppe-5 | 2023-09-28T08:30:29.000Z | [
"task_categories:object-detection",
"size_categories:100M<n<1B",
"license:apache-2.0",
"object detection",
"region:us"
] | qgyd2021 | CPPE-5 (Medical Personal Protective Equipment) is a new, challenging dataset whose goal
is to allow the study of subordinate categorization of medical personal protective equipment,
which is not possible with other popular datasets that focus on broad-level categories. | @misc{dagli2021cppe5,
title={CPPE-5: Medical Personal Protective Equipment Dataset},
author={Rishit Dagli and Ali Mustufa Shaikh},
year={2021},
eprint={2112.09569},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | null | 0 | 88 | ---
license: apache-2.0
task_categories:
- object-detection
tags:
- object detection
size_categories:
- 100M<n<1B
---
## cppe-5
I am practicing [object-detection](https://huggingface.co/docs/transformers/tasks/object_detection) with transformers.
I run the code on Kaggle because it provides free GPUs, but Kaggle cannot access Google Drive, so I copied this dataset from [cppe-5](https://huggingface.co/datasets/cppe-5).
Class labels:
```text
["Coverall", "Face_Shield", "Gloves", "Goggles", "Mask"]
```
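For training with transformers-style object-detection models, a label list like the one above is typically turned into `id2label`/`label2id` mappings; a minimal sketch:

```python
# The five CPPE-5 category names listed above, mapped to integer ids in the
# id2label/label2id form an object-detection model config typically expects.
CATEGORIES = ["Coverall", "Face_Shield", "Gloves", "Goggles", "Mask"]

id2label = {i: name for i, name in enumerate(CATEGORIES)}
label2id = {name: i for i, name in id2label.items()}
```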
|
sehyun66/Finnhub-News | 2023-10-02T16:51:29.000Z | [
"finance",
"region:us"
] | sehyun66 | null | null | null | 2 | 88 | ---
dataset_info:
- config_name: clean
features:
- name: datetime
dtype: int64
- name: image
dtype: string
- name: related
dtype: string
- name: source
dtype: string
- name: summary
dtype: string
- name: url
dtype: string
- name: id
dtype: int64
- name: category
dtype: string
- name: headline
dtype: string
splits:
- name: clean
num_bytes: 150902085
num_examples: 316086
download_size: 78262136
dataset_size: 150902085
- config_name: default
features:
- name: related
dtype: string
- name: datetime
dtype: int64
- name: image
dtype: string
- name: url
dtype: string
- name: headline
dtype: string
- name: finbert_sentiment
struct:
- name: negative
dtype: float64
- name: neutral
dtype: float64
- name: postive
dtype: float64
- name: source
dtype: string
- name: summary
dtype: string
- name: id
dtype: int64
- name: category
dtype: string
splits:
- name: train
num_bytes: 251731744
num_examples: 515851
download_size: 113022298
dataset_size: 251731744
configs:
- config_name: clean
data_files:
- split: clean
path: clean/clean-*
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- finance
---
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nicolas-BZRD/DOLE_opendata | 2023-09-29T14:52:42.000Z | [
"size_categories:1K<n<10K",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 88 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 86993000
num_examples: 4120
download_size: 36263044
dataset_size: 86993000
license: odc-by
language:
- fr
tags:
- legal
size_categories:
- 1K<n<10K
---
# DOLE
This [section](https://echanges.dila.gouv.fr/OPENDATA/DOLE/) covers laws published since the start of the twelfth parliamentary term (June 2002), ordinances published since 2002, and laws in preparation (drafts and proposals).
The legislative files provide information before and after the enactment of legislation.
Legislative files concern laws covered by article 39 of the Constitution. A legislative file is also opened when a parliamentary assembly decides not to examine texts covered by Article 53 of the Constitution in simplified form.
Since the entry into force of the 2008 constitutional reform, legislative files on bills are only opened after the text has been adopted by the first assembly to which it is referred. |
kewu93/pixel_500 | 2023-10-06T09:31:47.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 88 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 5863021.833333333
num_examples: 500
- name: val
num_bytes: 1168940.1666666667
num_examples: 100
download_size: 6125119
dataset_size: 7031962.0
---
# Dataset Card for "pixel_500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vishnupriyavr/wiki-movie-plots-with-summaries-faiss-embeddings | 2023-10-08T16:02:50.000Z | [
"region:us"
] | vishnupriyavr | null | null | null | 0 | 88 | ---
dataset_info:
features:
- name: Release Year
dtype: int64
- name: Title
dtype: string
- name: Cast
dtype: string
- name: Wiki Page
dtype: string
- name: Plot
dtype: string
- name: plot_length
dtype: int64
- name: text
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 256974740
num_examples: 33155
download_size: 216835238
dataset_size: 256974740
---
# Dataset Card for "wiki-movie-plots-with-summaries-faiss-embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SetFit/amazon_reviews_multi_en | 2022-04-13T19:06:11.000Z | [
"license:apache-2.0",
"region:us"
] | SetFit | null | null | null | 1 | 87 | ---
license: apache-2.0
---
|
generalization/newsgroups_Full-p_1 | 2022-09-09T05:18:22.000Z | [
"region:us"
] | generalization | null | null | null | 0 | 87 | Entry not found |
beki/privy | 2023-04-25T21:45:06.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:100K<n<200K",
"size_categories:300K<n<400K",
"language:en",
"license:mit",
"pii-detection",
"region:us"
] | beki | null | null | null | 8 | 87 | ---
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<200K
- 300K<n<400K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
tags:
- pii-detection
train-eval-index:
- config: privy-small
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
metrics:
- type: seqeval
name: seqeval
pretty_name: Privy English
---
# Dataset Card for "privy-english"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy)
### Dataset Summary
A synthetic PII dataset generated using [Privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy), a tool which parses OpenAPI specifications and generates synthetic request payloads, searching for keywords in API schema definitions to select appropriate data providers. Generated API payloads are converted to various protocol trace formats like JSON and SQL to approximate the data developers might encounter while debugging applications.
This labelled PII dataset consists of protocol traces (JSON, SQL (PostgreSQL, MySQL), HTML, and XML) generated from OpenAPI specifications and includes 60+ PII types.
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) and PII classification.
### Label Scheme
<details>
<summary>View label scheme (26 labels for 60 PII data providers)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `PERSON`, `LOCATION`, `NRP`, `DATE_TIME`, `CREDIT_CARD`, `URL`, `IBAN_CODE`, `US_BANK_NUMBER`, `PHONE_NUMBER`, `US_SSN`, `US_PASSPORT`, `US_DRIVER_LICENSE`, `IP_ADDRESS`, `US_ITIN`, `EMAIL_ADDRESS`, `ORGANIZATION`, `TITLE`, `COORDINATE`, `IMEI`, `PASSWORD`, `LICENSE_PLATE`, `CURRENCY`, `ROUTING_NUMBER`, `SWIFT_CODE`, `MAC_ADDRESS`, `AGE` |
</details>
### Languages
English
## Dataset Structure
### Data Instances
A sample:
```
{
"full_text": "{\"full_name_female\": \"Bethany Williams\", \"NewServerCertificateName\": \"\", \"NewPath\": \"\", \"ServerCertificateName\": \"dCwMNqR\", \"Action\": \"\", \"Version\": \"u zNS zNS\"}",
"masked": "{\"full_name_female\": \"{{name_female}}\", \"NewServerCertificateName\": \"{{string}}\", \"NewPath\": \"{{string}}\", \"ServerCertificateName\": \"{{string}}\", \"Action\": \"{{string}}\", \"Version\": \"{{string}}\"}",
"spans": [
{
"entity_type": "PERSON",
"entity_value": "Bethany Williams",
"start_position": 22,
"end_position": 38
}
],
"template_id": 51889,
"metadata": null
}
```
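The span offsets index directly into `full_text`, which can be sanity-checked on the sample above. The `full_text` below is shortened for brevity but keeps the same prefix, so the offsets from the sample still line up:

```python
# Sanity-check of the span offsets in the sample above: each span's
# (start_position, end_position) pair indexes directly into full_text.
# full_text is shortened here but keeps the sample's prefix, so the
# original offsets remain valid.
sample = {
    "full_text": '{"full_name_female": "Bethany Williams", "Action": ""}',
    "spans": [
        {
            "entity_type": "PERSON",
            "entity_value": "Bethany Williams",
            "start_position": 22,
            "end_position": 38,
        }
    ],
}

span = sample["spans"][0]
extracted = sample["full_text"][span["start_position"]:span["end_position"]]
assert extracted == span["entity_value"]
```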
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@online{WinNT,
author = {Benjamin Kilimnik},
title = {{Privy} Synthetic PII Protocol Trace Dataset},
year = 2022,
url = {https://huggingface.co/datasets/beki/privy},
}
```
### Contributions
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lewtun/music_genres | 2022-11-02T10:27:30.000Z | [
"region:us"
] | lewtun | null | null | null | 0 | 87 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: song_id
dtype: int64
- name: genre_id
dtype: int64
- name: genre
dtype: string
splits:
- name: test
num_bytes: 1978321742.996
num_examples: 5076
- name: train
num_bytes: 7844298868.902
num_examples: 19909
download_size: 9793244255
dataset_size: 9822620611.898
---
# Dataset Card for "music_genres"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
taesiri/imagenet-hard | 2023-06-16T18:50:51.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"OOD",
"ImageNet",
"Out Of Distribution",
"arxiv:2304.05538",
"region:us"
] | taesiri | null | null | null | 7 | 87 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
sequence: int64
- name: origin
dtype: string
- name: english_label
sequence: string
splits:
- name: validation
num_bytes: 1771418938.94
num_examples: 10980
download_size: 6380094503
dataset_size: 1771418938.94
license: mit
task_categories:
- image-classification
language:
- en
tags:
- OOD
- ImageNet
- Out Of Distribution
pretty_name: ImageNet-Hard
size_categories:
- 10K<n<100K
---
# Dataset Card for "ImageNet-Hard"
[Project Page](https://taesiri.github.io/ZoomIsAllYouNeed/) - [ArXiv](https://arxiv.org/abs/2304.05538) - [Paper](https://huggingface.co/papers/2304.05538) - [Github](https://github.com/taesiri/ZoomIsAllYouNeed) - [Image Browser](https://huggingface.co/spaces/taesiri/ImageNet-Hard-Browser)
## Dataset Summary
**ImageNet-Hard** is a new benchmark that comprises 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models as merely zooming in often fails to improve their ability to classify images correctly. As a result, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving a mere `2.02%` accuracy.
*ImageNet-Hard-4K*: for the 4K version, please refer to [this dataset](https://huggingface.co/datasets/taesiri/imagenet-hard-4K).
### Dataset Distribution

### Classifiers Performance
| Model | Accuracy |
| ------------------- | -------- |
| AlexNet | 7.34 |
| VGG-16 | 12.00 |
| ResNet-18 | 10.86 |
| ResNet-50 | 14.74 |
| ViT-B/32 | 18.52 |
| EfficientNet-B0 | 16.57 |
| EfficientNet-B7 | 23.20 |
| EfficientNet-L2-Ns | 39.00 |
| CLIP-ViT-L/14@224px | 1.86 |
| CLIP-ViT-L/14@336px | 2.02 |
| OpenCLIP-ViT-bigG-14| 15.93 |
| OpenCLIP-ViT-L-14 | 15.60 |
**Evaluation Code**
* CLIP <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Prompt_Engineering_for_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
* [OpenCLIP](https://github.com/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/benchmark_openclip.py)
* Other models <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Benchmark_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
## Supported Tasks
- `image-classification`: The objective of this task is to classify an image into one or more classes, selected from 1000 ImageNet categories (allowing for multiple ground-truth labels per image).
## Languages
The `english_label` field in the dataset is in English.
## Dataset Structure
### Data Instances
An example looks like this:
```python
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=575x409 at 0x7F09456B53A0>,
'label': [0],
'origin': 'imagenet_sketch',
'english_label': ['tench']
}
```
### Data Fields
The data instances have the following fields:
- image: A PIL.Image.Image object containing the image. Note that accessing the image column (`dataset[0]["image"]`) automatically decodes the image file. Decoding a large number of image files can take a significant amount of time, so query the sample index before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: A List[int] collection containing the ground-truth ids.
- origin: A string naming the source dataset.
- english_label: A List[str] collection containing the English labels for the ground-truth classes.
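Since `label` may carry several ground-truth ids (parallel to `english_label`), a natural scoring rule is "correct if the prediction matches any of them"; a minimal sketch under that reading:

```python
# Sketch using the fields above: 'label' can hold several ground-truth ids
# (parallel to 'english_label'), so a prediction is scored correct if it
# matches any of them. The instance below omits the 'image' field.
instance = {
    "label": [0],
    "origin": "imagenet_sketch",
    "english_label": ["tench"],
}

def is_correct(pred_id: int, labels: list) -> bool:
    return pred_id in labels
```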
<details>
<summary>
Click here to see the full list of ImageNet class labels mapping:
</summary>
|id|Class|
|--|-----|
|0 | tench, Tinca tinca|
|1 | goldfish, Carassius auratus|
|2 | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias|
|3 | tiger shark, Galeocerdo cuvieri|
|4 | hammerhead, hammerhead shark|
|5 | electric ray, crampfish, numbfish, torpedo|
|6 | stingray|
|7 | cock|
|8 | hen|
|9 | ostrich, Struthio camelus|
|10 | brambling, Fringilla montifringilla|
|11 | goldfinch, Carduelis carduelis|
|12 | house finch, linnet, Carpodacus mexicanus|
|13 | junco, snowbird|
|14 | indigo bunting, indigo finch, indigo bird, Passerina cyanea|
|15 | robin, American robin, Turdus migratorius|
|16 | bulbul|
|17 | jay|
|18 | magpie|
|19 | chickadee|
|20 | water ouzel, dipper|
|21 | kite|
|22 | bald eagle, American eagle, Haliaeetus leucocephalus|
|23 | vulture|
|24 | great grey owl, great gray owl, Strix nebulosa|
|25 | European fire salamander, Salamandra salamandra|
|26 | common newt, Triturus vulgaris|
|27 | eft|
|28 | spotted salamander, Ambystoma maculatum|
|29 | axolotl, mud puppy, Ambystoma mexicanum|
|30 | bullfrog, Rana catesbeiana|
|31 | tree frog, tree-frog|
|32 | tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui|
|33 | loggerhead, loggerhead turtle, Caretta caretta|
|34 | leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea|
|35 | mud turtle|
|36 | terrapin|
|37 | box turtle, box tortoise|
|38 | banded gecko|
|39 | common iguana, iguana, Iguana iguana|
|40 | American chameleon, anole, Anolis carolinensis|
|41 | whiptail, whiptail lizard|
|42 | agama|
|43 | frilled lizard, Chlamydosaurus kingi|
|44 | alligator lizard|
|45 | Gila monster, Heloderma suspectum|
|46 | green lizard, Lacerta viridis|
|47 | African chameleon, Chamaeleo chamaeleon|
|48 | Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis|
|49 | African crocodile, Nile crocodile, Crocodylus niloticus|
|50 | American alligator, Alligator mississipiensis|
|51 | triceratops|
|52 | thunder snake, worm snake, Carphophis amoenus|
|53 | ringneck snake, ring-necked snake, ring snake|
|54 | hognose snake, puff adder, sand viper|
|55 | green snake, grass snake|
|56 | king snake, kingsnake|
|57 | garter snake, grass snake|
|58 | water snake|
|59 | vine snake|
|60 | night snake, Hypsiglena torquata|
|61 | boa constrictor, Constrictor constrictor|
|62 | rock python, rock snake, Python sebae|
|63 | Indian cobra, Naja naja|
|64 | green mamba|
|65 | sea snake|
|66 | horned viper, cerastes, sand viper, horned asp, Cerastes cornutus|
|67 | diamondback, diamondback rattlesnake, Crotalus adamanteus|
|68 | sidewinder, horned rattlesnake, Crotalus cerastes|
|69 | trilobite|
|70 | harvestman, daddy longlegs, Phalangium opilio|
|71 | scorpion|
|72 | black and gold garden spider, Argiope aurantia|
|73 | barn spider, Araneus cavaticus|
|74 | garden spider, Aranea diademata|
|75 | black widow, Latrodectus mactans|
|76 | tarantula|
|77 | wolf spider, hunting spider|
|78 | tick|
|79 | centipede|
|80 | black grouse|
|81 | ptarmigan|
|82 | ruffed grouse, partridge, Bonasa umbellus|
|83 | prairie chicken, prairie grouse, prairie fowl|
|84 | peacock|
|85 | quail|
|86 | partridge|
|87 | African grey, African gray, Psittacus erithacus|
|88 | macaw|
|89 | sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita|
|90 | lorikeet|
|91 | coucal|
|92 | bee eater|
|93 | hornbill|
|94 | hummingbird|
|95 | jacamar|
|96 | toucan|
|97 | drake|
|98 | red-breasted merganser, Mergus serrator|
|99 | goose|
|100 | black swan, Cygnus atratus|
|101 | tusker|
|102 | echidna, spiny anteater, anteater|
|103 | platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus|
|104 | wallaby, brush kangaroo|
|105 | koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus|
|106 | wombat|
|107 | jellyfish|
|108 | sea anemone, anemone|
|109 | brain coral|
|110 | flatworm, platyhelminth|
|111 | nematode, nematode worm, roundworm|
|112 | conch|
|113 | snail|
|114 | slug|
|115 | sea slug, nudibranch|
|116 | chiton, coat-of-mail shell, sea cradle, polyplacophore|
|117 | chambered nautilus, pearly nautilus, nautilus|
|118 | Dungeness crab, Cancer magister|
|119 | rock crab, Cancer irroratus|
|120 | fiddler crab|
|121 | king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica|
|122 | American lobster, Northern lobster, Maine lobster, Homarus americanus|
|123 | spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish|
|124 | crayfish, crawfish, crawdad, crawdaddy|
|125 | hermit crab|
|126 | isopod|
|127 | white stork, Ciconia ciconia|
|128 | black stork, Ciconia nigra|
|129 | spoonbill|
|130 | flamingo|
|131 | little blue heron, Egretta caerulea|
|132 | American egret, great white heron, Egretta albus|
|133 | bittern|
|134 | crane|
|135 | limpkin, Aramus pictus|
|136 | European gallinule, Porphyrio porphyrio|
|137 | American coot, marsh hen, mud hen, water hen, Fulica americana|
|138 | bustard|
|139 | ruddy turnstone, Arenaria interpres|
|140 | red-backed sandpiper, dunlin, Erolia alpina|
|141 | redshank, Tringa totanus|
|142 | dowitcher|
|143 | oystercatcher, oyster catcher|
|144 | pelican|
|145 | king penguin, Aptenodytes patagonica|
|146 | albatross, mollymawk|
|147 | grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus|
|148 | killer whale, killer, orca, grampus, sea wolf, Orcinus orca|
|149 | dugong, Dugong dugon|
|150 | sea lion|
|151 | Chihuahua|
|152 | Japanese spaniel|
|153 | Maltese dog, Maltese terrier, Maltese|
|154 | Pekinese, Pekingese, Peke|
|155 | Shih-Tzu|
|156 | Blenheim spaniel|
|157 | papillon|
|158 | toy terrier|
|159 | Rhodesian ridgeback|
|160 | Afghan hound, Afghan|
|161 | basset, basset hound|
|162 | beagle|
|163 | bloodhound, sleuthhound|
|164 | bluetick|
|165 | black-and-tan coonhound|
|166 | Walker hound, Walker foxhound|
|167 | English foxhound|
|168 | redbone|
|169 | borzoi, Russian wolfhound|
|170 | Irish wolfhound|
|171 | Italian greyhound|
|172 | whippet|
|173 | Ibizan hound, Ibizan Podenco|
|174 | Norwegian elkhound, elkhound|
|175 | otterhound, otter hound|
|176 | Saluki, gazelle hound|
|177 | Scottish deerhound, deerhound|
|178 | Weimaraner|
|179 | Staffordshire bullterrier, Staffordshire bull terrier|
|180 | American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier|
|181 | Bedlington terrier|
|182 | Border terrier|
|183 | Kerry blue terrier|
|184 | Irish terrier|
|185 | Norfolk terrier|
|186 | Norwich terrier|
|187 | Yorkshire terrier|
|188 | wire-haired fox terrier|
|189 | Lakeland terrier|
|190 | Sealyham terrier, Sealyham|
|191 | Airedale, Airedale terrier|
|192 | cairn, cairn terrier|
|193 | Australian terrier|
|194 | Dandie Dinmont, Dandie Dinmont terrier|
|195 | Boston bull, Boston terrier|
|196 | miniature schnauzer|
|197 | giant schnauzer|
|198 | standard schnauzer|
|199 | Scotch terrier, Scottish terrier, Scottie|
|200 | Tibetan terrier, chrysanthemum dog|
|201 | silky terrier, Sydney silky|
|202 | soft-coated wheaten terrier|
|203 | West Highland white terrier|
|204 | Lhasa, Lhasa apso|
|205 | flat-coated retriever|
|206 | curly-coated retriever|
|207 | golden retriever|
|208 | Labrador retriever|
|209 | Chesapeake Bay retriever|
|210 | German short-haired pointer|
|211 | vizsla, Hungarian pointer|
|212 | English setter|
|213 | Irish setter, red setter|
|214 | Gordon setter|
|215 | Brittany spaniel|
|216 | clumber, clumber spaniel|
|217 | English springer, English springer spaniel|
|218 | Welsh springer spaniel|
|219 | cocker spaniel, English cocker spaniel, cocker|
|220 | Sussex spaniel|
|221 | Irish water spaniel|
|222 | kuvasz|
|223 | schipperke|
|224 | groenendael|
|225 | malinois|
|226 | briard|
|227 | kelpie|
|228 | komondor|
|229 | Old English sheepdog, bobtail|
|230 | Shetland sheepdog, Shetland sheep dog, Shetland|
|231 | collie|
|232 | Border collie|
|233 | Bouvier des Flandres, Bouviers des Flandres|
|234 | Rottweiler|
|235 | German shepherd, German shepherd dog, German police dog, alsatian|
|236 | Doberman, Doberman pinscher|
|237 | miniature pinscher|
|238 | Greater Swiss Mountain dog|
|239 | Bernese mountain dog|
|240 | Appenzeller|
|241 | EntleBucher|
|242 | boxer|
|243 | bull mastiff|
|244 | Tibetan mastiff|
|245 | French bulldog|
|246 | Great Dane|
|247 | Saint Bernard, St Bernard|
|248 | Eskimo dog, husky|
|249 | malamute, malemute, Alaskan malamute|
|250 | Siberian husky|
|251 | dalmatian, coach dog, carriage dog|
|252 | affenpinscher, monkey pinscher, monkey dog|
|253 | basenji|
|254 | pug, pug-dog|
|255 | Leonberg|
|256 | Newfoundland, Newfoundland dog|
|257 | Great Pyrenees|
|258 | Samoyed, Samoyede|
|259 | Pomeranian|
|260 | chow, chow chow|
|261 | keeshond|
|262 | Brabancon griffon|
|263 | Pembroke, Pembroke Welsh corgi|
|264 | Cardigan, Cardigan Welsh corgi|
|265 | toy poodle|
|266 | miniature poodle|
|267 | standard poodle|
|268 | Mexican hairless|
|269 | timber wolf, grey wolf, gray wolf, Canis lupus|
|270 | white wolf, Arctic wolf, Canis lupus tundrarum|
|271 | red wolf, maned wolf, Canis rufus, Canis niger|
|272 | coyote, prairie wolf, brush wolf, Canis latrans|
|273 | dingo, warrigal, warragal, Canis dingo|
|274 | dhole, Cuon alpinus|
|275 | African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus|
|276 | hyena, hyaena|
|277 | red fox, Vulpes vulpes|
|278 | kit fox, Vulpes macrotis|
|279 | Arctic fox, white fox, Alopex lagopus|
|280 | grey fox, gray fox, Urocyon cinereoargenteus|
|281 | tabby, tabby cat|
|282 | tiger cat|
|283 | Persian cat|
|284 | Siamese cat, Siamese|
|285 | Egyptian cat|
|286 | cougar, puma, catamount, mountain lion, painter, panther, Felis concolor|
|287 | lynx, catamount|
|288 | leopard, Panthera pardus|
|289 | snow leopard, ounce, Panthera uncia|
|290 | jaguar, panther, Panthera onca, Felis onca|
|291 | lion, king of beasts, Panthera leo|
|292 | tiger, Panthera tigris|
|293 | cheetah, chetah, Acinonyx jubatus|
|294 | brown bear, bruin, Ursus arctos|
|295 | American black bear, black bear, Ursus americanus, Euarctos americanus|
|296 | ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus|
|297 | sloth bear, Melursus ursinus, Ursus ursinus|
|298 | mongoose|
|299 | meerkat, mierkat|
|300 | tiger beetle|
|301 | ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle|
|302 | ground beetle, carabid beetle|
|303 | long-horned beetle, longicorn, longicorn beetle|
|304 | leaf beetle, chrysomelid|
|305 | dung beetle|
|306 | rhinoceros beetle|
|307 | weevil|
|308 | fly|
|309 | bee|
|310 | ant, emmet, pismire|
|311 | grasshopper, hopper|
|312 | cricket|
|313 | walking stick, walkingstick, stick insect|
|314 | cockroach, roach|
|315 | mantis, mantid|
|316 | cicada, cicala|
|317 | leafhopper|
|318 | lacewing, lacewing fly|
|319 | dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk|
|320 | damselfly|
|321 | admiral|
|322 | ringlet, ringlet butterfly|
|323 | monarch, monarch butterfly, milkweed butterfly, Danaus plexippus|
|324 | cabbage butterfly|
|325 | sulphur butterfly, sulfur butterfly|
|326 | lycaenid, lycaenid butterfly|
|327 | starfish, sea star|
|328 | sea urchin|
|329 | sea cucumber, holothurian|
|330 | wood rabbit, cottontail, cottontail rabbit|
|331 | hare|
|332 | Angora, Angora rabbit|
|333 | hamster|
|334 | porcupine, hedgehog|
|335 | fox squirrel, eastern fox squirrel, Sciurus niger|
|336 | marmot|
|337 | beaver|
|338 | guinea pig, Cavia cobaya|
|339 | sorrel|
|340 | zebra|
|341 | hog, pig, grunter, squealer, Sus scrofa|
|342 | wild boar, boar, Sus scrofa|
|343 | warthog|
|344 | hippopotamus, hippo, river horse, Hippopotamus amphibius|
|345 | ox|
|346 | water buffalo, water ox, Asiatic buffalo, Bubalus bubalis|
|347 | bison|
|348 | ram, tup|
|349 | bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis|
|350 | ibex, Capra ibex|
|351 | hartebeest|
|352 | impala, Aepyceros melampus|
|353 | gazelle|
|354 | Arabian camel, dromedary, Camelus dromedarius|
|355 | llama|
|356 | weasel|
|357 | mink|
|358 | polecat, fitch, foulmart, foumart, Mustela putorius|
|359 | black-footed ferret, ferret, Mustela nigripes|
|360 | otter|
|361 | skunk, polecat, wood pussy|
|362 | badger|
|363 | armadillo|
|364 | three-toed sloth, ai, Bradypus tridactylus|
|365 | orangutan, orang, orangutang, Pongo pygmaeus|
|366 | gorilla, Gorilla gorilla|
|367 | chimpanzee, chimp, Pan troglodytes|
|368 | gibbon, Hylobates lar|
|369 | siamang, Hylobates syndactylus, Symphalangus syndactylus|
|370 | guenon, guenon monkey|
|371 | patas, hussar monkey, Erythrocebus patas|
|372 | baboon|
|373 | macaque|
|374 | langur|
|375 | colobus, colobus monkey|
|376 | proboscis monkey, Nasalis larvatus|
|377 | marmoset|
|378 | capuchin, ringtail, Cebus capucinus|
|379 | howler monkey, howler|
|380 | titi, titi monkey|
|381 | spider monkey, Ateles geoffroyi|
|382 | squirrel monkey, Saimiri sciureus|
|383 | Madagascar cat, ring-tailed lemur, Lemur catta|
|384 | indri, indris, Indri indri, Indri brevicaudatus|
|385 | Indian elephant, Elephas maximus|
|386 | African elephant, Loxodonta africana|
|387 | lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens|
|388 | giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca|
|389 | barracouta, snoek|
|390 | eel|
|391 | coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch|
|392 | rock beauty, Holocanthus tricolor|
|393 | anemone fish|
|394 | sturgeon|
|395 | gar, garfish, garpike, billfish, Lepisosteus osseus|
|396 | lionfish|
|397 | puffer, pufferfish, blowfish, globefish|
|398 | abacus|
|399 | abaya|
|400 | academic gown, academic robe, judge's robe|
|401 | accordion, piano accordion, squeeze box|
|402 | acoustic guitar|
|403 | aircraft carrier, carrier, flattop, attack aircraft carrier|
|404 | airliner|
|405 | airship, dirigible|
|406 | altar|
|407 | ambulance|
|408 | amphibian, amphibious vehicle|
|409 | analog clock|
|410 | apiary, bee house|
|411 | apron|
|412 | ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin|
|413 | assault rifle, assault gun|
|414 | backpack, back pack, knapsack, packsack, rucksack, haversack|
|415 | bakery, bakeshop, bakehouse|
|416 | balance beam, beam|
|417 | balloon|
|418 | ballpoint, ballpoint pen, ballpen, Biro|
|419 | Band Aid|
|420 | banjo|
|421 | bannister, banister, balustrade, balusters, handrail|
|422 | barbell|
|423 | barber chair|
|424 | barbershop|
|425 | barn|
|426 | barometer|
|427 | barrel, cask|
|428 | barrow, garden cart, lawn cart, wheelbarrow|
|429 | baseball|
|430 | basketball|
|431 | bassinet|
|432 | bassoon|
|433 | bathing cap, swimming cap|
|434 | bath towel|
|435 | bathtub, bathing tub, bath, tub|
|436 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon|
|437 | beacon, lighthouse, beacon light, pharos|
|438 | beaker|
|439 | bearskin, busby, shako|
|440 | beer bottle|
|441 | beer glass|
|442 | bell cote, bell cot|
|443 | bib|
|444 | bicycle-built-for-two, tandem bicycle, tandem|
|445 | bikini, two-piece|
|446 | binder, ring-binder|
|447 | binoculars, field glasses, opera glasses|
|448 | birdhouse|
|449 | boathouse|
|450 | bobsled, bobsleigh, bob|
|451 | bolo tie, bolo, bola tie, bola|
|452 | bonnet, poke bonnet|
|453 | bookcase|
|454 | bookshop, bookstore, bookstall|
|455 | bottlecap|
|456 | bow|
|457 | bow tie, bow-tie, bowtie|
|458 | brass, memorial tablet, plaque|
|459 | brassiere, bra, bandeau|
|460 | breakwater, groin, groyne, mole, bulwark, seawall, jetty|
|461 | breastplate, aegis, egis|
|462 | broom|
|463 | bucket, pail|
|464 | buckle|
|465 | bulletproof vest|
|466 | bullet train, bullet|
|467 | butcher shop, meat market|
|468 | cab, hack, taxi, taxicab|
|469 | caldron, cauldron|
|470 | candle, taper, wax light|
|471 | cannon|
|472 | canoe|
|473 | can opener, tin opener|
|474 | cardigan|
|475 | car mirror|
|476 | carousel, carrousel, merry-go-round, roundabout, whirligig|
|477 | carpenter's kit, tool kit|
|478 | carton|
|479 | car wheel|
|480 | cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM|
|481 | cassette|
|482 | cassette player|
|483 | castle|
|484 | catamaran|
|485 | CD player|
|486 | cello, violoncello|
|487 | cellular telephone, cellular phone, cellphone, cell, mobile phone|
|488 | chain|
|489 | chainlink fence|
|490 | chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour|
|491 | chain saw, chainsaw|
|492 | chest|
|493 | chiffonier, commode|
|494 | chime, bell, gong|
|495 | china cabinet, china closet|
|496 | Christmas stocking|
|497 | church, church building|
|498 | cinema, movie theater, movie theatre, movie house, picture palace|
|499 | cleaver, meat cleaver, chopper|
|500 | cliff dwelling|
|501 | cloak|
|502 | clog, geta, patten, sabot|
|503 | cocktail shaker|
|504 | coffee mug|
|505 | coffeepot|
|506 | coil, spiral, volute, whorl, helix|
|507 | combination lock|
|508 | computer keyboard, keypad|
|509 | confectionery, confectionary, candy store|
|510 | container ship, containership, container vessel|
|511 | convertible|
|512 | corkscrew, bottle screw|
|513 | cornet, horn, trumpet, trump|
|514 | cowboy boot|
|515 | cowboy hat, ten-gallon hat|
|516 | cradle|
|517 | crane_1|
|518 | crash helmet|
|519 | crate|
|520 | crib, cot|
|521 | Crock Pot|
|522 | croquet ball|
|523 | crutch|
|524 | cuirass|
|525 | dam, dike, dyke|
|526 | desk|
|527 | desktop computer|
|528 | dial telephone, dial phone|
|529 | diaper, nappy, napkin|
|530 | digital clock|
|531 | digital watch|
|532 | dining table, board|
|533 | dishrag, dishcloth|
|534 | dishwasher, dish washer, dishwashing machine|
|535 | disk brake, disc brake|
|536 | dock, dockage, docking facility|
|537 | dogsled, dog sled, dog sleigh|
|538 | dome|
|539 | doormat, welcome mat|
|540 | drilling platform, offshore rig|
|541 | drum, membranophone, tympan|
|542 | drumstick|
|543 | dumbbell|
|544 | Dutch oven|
|545 | electric fan, blower|
|546 | electric guitar|
|547 | electric locomotive|
|548 | entertainment center|
|549 | envelope|
|550 | espresso maker|
|551 | face powder|
|552 | feather boa, boa|
|553 | file, file cabinet, filing cabinet|
|554 | fireboat|
|555 | fire engine, fire truck|
|556 | fire screen, fireguard|
|557 | flagpole, flagstaff|
|558 | flute, transverse flute|
|559 | folding chair|
|560 | football helmet|
|561 | forklift|
|562 | fountain|
|563 | fountain pen|
|564 | four-poster|
|565 | freight car|
|566 | French horn, horn|
|567 | frying pan, frypan, skillet|
|568 | fur coat|
|569 | garbage truck, dustcart|
|570 | gasmask, respirator, gas helmet|
|571 | gas pump, gasoline pump, petrol pump, island dispenser|
|572 | goblet|
|573 | go-kart|
|574 | golf ball|
|575 | golfcart, golf cart|
|576 | gondola|
|577 | gong, tam-tam|
|578 | gown|
|579 | grand piano, grand|
|580 | greenhouse, nursery, glasshouse|
|581 | grille, radiator grille|
|582 | grocery store, grocery, food market, market|
|583 | guillotine|
|584 | hair slide|
|585 | hair spray|
|586 | half track|
|587 | hammer|
|588 | hamper|
|589 | hand blower, blow dryer, blow drier, hair dryer, hair drier|
|590 | hand-held computer, hand-held microcomputer|
|591 | handkerchief, hankie, hanky, hankey|
|592 | hard disc, hard disk, fixed disk|
|593 | harmonica, mouth organ, harp, mouth harp|
|594 | harp|
|595 | harvester, reaper|
|596 | hatchet|
|597 | holster|
|598 | home theater, home theatre|
|599 | honeycomb|
|600 | hook, claw|
|601 | hoopskirt, crinoline|
|602 | horizontal bar, high bar|
|603 | horse cart, horse-cart|
|604 | hourglass|
|605 | iPod|
|606 | iron, smoothing iron|
|607 | jack-o'-lantern|
|608 | jean, blue jean, denim|
|609 | jeep, landrover|
|610 | jersey, T-shirt, tee shirt|
|611 | jigsaw puzzle|
|612 | jinrikisha, ricksha, rickshaw|
|613 | joystick|
|614 | kimono|
|615 | knee pad|
|616 | knot|
|617 | lab coat, laboratory coat|
|618 | ladle|
|619 | lampshade, lamp shade|
|620 | laptop, laptop computer|
|621 | lawn mower, mower|
|622 | lens cap, lens cover|
|623 | letter opener, paper knife, paperknife|
|624 | library|
|625 | lifeboat|
|626 | lighter, light, igniter, ignitor|
|627 | limousine, limo|
|628 | liner, ocean liner|
|629 | lipstick, lip rouge|
|630 | Loafer|
|631 | lotion|
|632 | loudspeaker, speaker, speaker unit, loudspeaker system, speaker system|
|633 | loupe, jeweler's loupe|
|634 | lumbermill, sawmill|
|635 | magnetic compass|
|636 | mailbag, postbag|
|637 | mailbox, letter box|
|638 | maillot|
|639 | maillot, tank suit|
|640 | manhole cover|
|641 | maraca|
|642 | marimba, xylophone|
|643 | mask|
|644 | matchstick|
|645 | maypole|
|646 | maze, labyrinth|
|647 | measuring cup|
|648 | medicine chest, medicine cabinet|
|649 | megalith, megalithic structure|
|650 | microphone, mike|
|651 | microwave, microwave oven|
|652 | military uniform|
|653 | milk can|
|654 | minibus|
|655 | miniskirt, mini|
|656 | minivan|
|657 | missile|
|658 | mitten|
|659 | mixing bowl|
|660 | mobile home, manufactured home|
|661 | Model T|
|662 | modem|
|663 | monastery|
|664 | monitor|
|665 | moped|
|666 | mortar|
|667 | mortarboard|
|668 | mosque|
|669 | mosquito net|
|670 | motor scooter, scooter|
|671 | mountain bike, all-terrain bike, off-roader|
|672 | mountain tent|
|673 | mouse, computer mouse|
|674 | mousetrap|
|675 | moving van|
|676 | muzzle|
|677 | nail|
|678 | neck brace|
|679 | necklace|
|680 | nipple|
|681 | notebook, notebook computer|
|682 | obelisk|
|683 | oboe, hautboy, hautbois|
|684 | ocarina, sweet potato|
|685 | odometer, hodometer, mileometer, milometer|
|686 | oil filter|
|687 | organ, pipe organ|
|688 | oscilloscope, scope, cathode-ray oscilloscope, CRO|
|689 | overskirt|
|690 | oxcart|
|691 | oxygen mask|
|692 | packet|
|693 | paddle, boat paddle|
|694 | paddlewheel, paddle wheel|
|695 | padlock|
|696 | paintbrush|
|697 | pajama, pyjama, pj's, jammies|
|698 | palace|
|699 | panpipe, pandean pipe, syrinx|
|700 | paper towel|
|701 | parachute, chute|
|702 | parallel bars, bars|
|703 | park bench|
|704 | parking meter|
|705 | passenger car, coach, carriage|
|706 | patio, terrace|
|707 | pay-phone, pay-station|
|708 | pedestal, plinth, footstall|
|709 | pencil box, pencil case|
|710 | pencil sharpener|
|711 | perfume, essence|
|712 | Petri dish|
|713 | photocopier|
|714 | pick, plectrum, plectron|
|715 | pickelhaube|
|716 | picket fence, paling|
|717 | pickup, pickup truck|
|718 | pier|
|719 | piggy bank, penny bank|
|720 | pill bottle|
|721 | pillow|
|722 | ping-pong ball|
|723 | pinwheel|
|724 | pirate, pirate ship|
|725 | pitcher, ewer|
|726 | plane, carpenter's plane, woodworking plane|
|727 | planetarium|
|728 | plastic bag|
|729 | plate rack|
|730 | plow, plough|
|731 | plunger, plumber's helper|
|732 | Polaroid camera, Polaroid Land camera|
|733 | pole|
|734 | police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria|
|735 | poncho|
|736 | pool table, billiard table, snooker table|
|737 | pop bottle, soda bottle|
|738 | pot, flowerpot|
|739 | potter's wheel|
|740 | power drill|
|741 | prayer rug, prayer mat|
|742 | printer|
|743 | prison, prison house|
|744 | projectile, missile|
|745 | projector|
|746 | puck, hockey puck|
|747 | punching bag, punch bag, punching ball, punchball|
|748 | purse|
|749 | quill, quill pen|
|750 | quilt, comforter, comfort, puff|
|751 | racer, race car, racing car|
|752 | racket, racquet|
|753 | radiator|
|754 | radio, wireless|
|755 | radio telescope, radio reflector|
|756 | rain barrel|
|757 | recreational vehicle, RV, R.V.|
|758 | reel|
|759 | reflex camera|
|760 | refrigerator, icebox|
|761 | remote control, remote|
|762 | restaurant, eating house, eating place, eatery|
|763 | revolver, six-gun, six-shooter|
|764 | rifle|
|765 | rocking chair, rocker|
|766 | rotisserie|
|767 | rubber eraser, rubber, pencil eraser|
|768 | rugby ball|
|769 | rule, ruler|
|770 | running shoe|
|771 | safe|
|772 | safety pin|
|773 | saltshaker, salt shaker|
|774 | sandal|
|775 | sarong|
|776 | sax, saxophone|
|777 | scabbard|
|778 | scale, weighing machine|
|779 | school bus|
|780 | schooner|
|781 | scoreboard|
|782 | screen, CRT screen|
|783 | screw|
|784 | screwdriver|
|785 | seat belt, seatbelt|
|786 | sewing machine|
|787 | shield, buckler|
|788 | shoe shop, shoe-shop, shoe store|
|789 | shoji|
|790 | shopping basket|
|791 | shopping cart|
|792 | shovel|
|793 | shower cap|
|794 | shower curtain|
|795 | ski|
|796 | ski mask|
|797 | sleeping bag|
|798 | slide rule, slipstick|
|799 | sliding door|
|800 | slot, one-armed bandit|
|801 | snorkel|
|802 | snowmobile|
|803 | snowplow, snowplough|
|804 | soap dispenser|
|805 | soccer ball|
|806 | sock|
|807 | solar dish, solar collector, solar furnace|
|808 | sombrero|
|809 | soup bowl|
|810 | space bar|
|811 | space heater|
|812 | space shuttle|
|813 | spatula|
|814 | speedboat|
|815 | spider web, spider's web|
|816 | spindle|
|817 | sports car, sport car|
|818 | spotlight, spot|
|819 | stage|
|820 | steam locomotive|
|821 | steel arch bridge|
|822 | steel drum|
|823 | stethoscope|
|824 | stole|
|825 | stone wall|
|826 | stopwatch, stop watch|
|827 | stove|
|828 | strainer|
|829 | streetcar, tram, tramcar, trolley, trolley car|
|830 | stretcher|
|831 | studio couch, day bed|
|832 | stupa, tope|
|833 | submarine, pigboat, sub, U-boat|
|834 | suit, suit of clothes|
|835 | sundial|
|836 | sunglass|
|837 | sunglasses, dark glasses, shades|
|838 | sunscreen, sunblock, sun blocker|
|839 | suspension bridge|
|840 | swab, swob, mop|
|841 | sweatshirt|
|842 | swimming trunks, bathing trunks|
|843 | swing|
|844 | switch, electric switch, electrical switch|
|845 | syringe|
|846 | table lamp|
|847 | tank, army tank, armored combat vehicle, armoured combat vehicle|
|848 | tape player|
|849 | teapot|
|850 | teddy, teddy bear|
|851 | television, television system|
|852 | tennis ball|
|853 | thatch, thatched roof|
|854 | theater curtain, theatre curtain|
|855 | thimble|
|856 | thresher, thrasher, threshing machine|
|857 | throne|
|858 | tile roof|
|859 | toaster|
|860 | tobacco shop, tobacconist shop, tobacconist|
|861 | toilet seat|
|862 | torch|
|863 | totem pole|
|864 | tow truck, tow car, wrecker|
|865 | toyshop|
|866 | tractor|
|867 | trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi|
|868 | tray|
|869 | trench coat|
|870 | tricycle, trike, velocipede|
|871 | trimaran|
|872 | tripod|
|873 | triumphal arch|
|874 | trolleybus, trolley coach, trackless trolley|
|875 | trombone|
|876 | tub, vat|
|877 | turnstile|
|878 | typewriter keyboard|
|879 | umbrella|
|880 | unicycle, monocycle|
|881 | upright, upright piano|
|882 | vacuum, vacuum cleaner|
|883 | vase|
|884 | vault|
|885 | velvet|
|886 | vending machine|
|887 | vestment|
|888 | viaduct|
|889 | violin, fiddle|
|890 | volleyball|
|891 | waffle iron|
|892 | wall clock|
|893 | wallet, billfold, notecase, pocketbook|
|894 | wardrobe, closet, press|
|895 | warplane, military plane|
|896 | washbasin, handbasin, washbowl, lavabo, wash-hand basin|
|897 | washer, automatic washer, washing machine|
|898 | water bottle|
|899 | water jug|
|900 | water tower|
|901 | whiskey jug|
|902 | whistle|
|903 | wig|
|904 | window screen|
|905 | window shade|
|906 | Windsor tie|
|907 | wine bottle|
|908 | wing|
|909 | wok|
|910 | wooden spoon|
|911 | wool, woolen, woollen|
|912 | worm fence, snake fence, snake-rail fence, Virginia fence|
|913 | wreck|
|914 | yawl|
|915 | yurt|
|916 | web site, website, internet site, site|
|917 | comic book|
|918 | crossword puzzle, crossword|
|919 | street sign|
|920 | traffic light, traffic signal, stoplight|
|921 | book jacket, dust cover, dust jacket, dust wrapper|
|922 | menu|
|923 | plate|
|924 | guacamole|
|925 | consomme|
|926 | hot pot, hotpot|
|927 | trifle|
|928 | ice cream, icecream|
|929 | ice lolly, lolly, lollipop, popsicle|
|930 | French loaf|
|931 | bagel, beigel|
|932 | pretzel|
|933 | cheeseburger|
|934 | hotdog, hot dog, red hot|
|935 | mashed potato|
|936 | head cabbage|
|937 | broccoli|
|938 | cauliflower|
|939 | zucchini, courgette|
|940 | spaghetti squash|
|941 | acorn squash|
|942 | butternut squash|
|943 | cucumber, cuke|
|944 | artichoke, globe artichoke|
|945 | bell pepper|
|946 | cardoon|
|947 | mushroom|
|948 | Granny Smith|
|949 | strawberry|
|950 | orange|
|951 | lemon|
|952 | fig|
|953 | pineapple, ananas|
|954 | banana|
|955 | jackfruit, jak, jack|
|956 | custard apple|
|957 | pomegranate|
|958 | hay|
|959 | carbonara|
|960 | chocolate sauce, chocolate syrup|
|961 | dough|
|962 | meat loaf, meatloaf|
|963 | pizza, pizza pie|
|964 | potpie|
|965 | burrito|
|966 | red wine|
|967 | espresso|
|968 | cup|
|969 | eggnog|
|970 | alp|
|971 | bubble|
|972 | cliff, drop, drop-off|
|973 | coral reef|
|974 | geyser|
|975 | lakeside, lakeshore|
|976 | promontory, headland, head, foreland|
|977 | sandbar, sand bar|
|978 | seashore, coast, seacoast, sea-coast|
|979 | valley, vale|
|980 | volcano|
|981 | ballplayer, baseball player|
|982 | groom, bridegroom|
|983 | scuba diver|
|984 | rapeseed|
|985 | daisy|
|986 | yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum|
|987 | corn|
|988 | acorn|
|989 | hip, rose hip, rosehip|
|990 | buckeye, horse chestnut, conker|
|991 | coral fungus|
|992 | agaric|
|993 | gyromitra|
|994 | stinkhorn, carrion fungus|
|995 | earthstar|
|996 | hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa|
|997 | bolete|
|998 | ear, spike, capitulum|
|999 | toilet tissue, toilet paper, bathroom tissue|
</details>
### Data Splits
This dataset is a validation-only set.
## Dataset Creation
### Source Data
This dataset is sourced from ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-A, ImageNet-C, ImageNet-R, ImageNet-Sketch, and ObjectNet.
## Citation Information
```
@article{taesiri2023zoom,
title={ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification},
author={Taesiri, Mohammad Reza and Nguyen, Giang and Habchi, Sarra and Bezemer, Cor-Paul and Nguyen, Anh},
journal={arXiv preprint arXiv:2304.05538},
year={2023}
}
``` |
tiedong/goat | 2023-05-25T22:14:53.000Z | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"region:us"
] | tiedong | null | null | null | 16 | 87 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for Goat
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset.json file contains ~1.7 million synthetic examples for arithmetic tasks, generated by dataset.ipynb.
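As a rough illustration of how such synthetic arithmetic examples can be produced, the sketch below generates instruction/answer pairs for addition problems. This is a hypothetical reconstruction, not the actual contents of dataset.ipynb: the real notebook covers more operations and prompt formats, and the field names `instruction`, `output`, and `answer` used here are assumptions.

```python
import json
import random

def make_addition_example(max_digits=8):
    """Generate one synthetic addition problem as an instruction/answer pair.

    Hypothetical sketch of the kind of generation dataset.ipynb performs;
    the field names and formatting here are illustrative assumptions.
    """
    a = random.randint(0, 10 ** max_digits - 1)
    b = random.randint(0, 10 ** max_digits - 1)
    return {
        "instruction": f"{a} + {b}",          # the prompt shown to the model
        "output": f"{a} + {b} = {a + b}",     # the full worked response
        "answer": str(a + b),                  # the bare numeric answer
    }

random.seed(0)
examples = [make_addition_example() for _ in range(5)]
# Each record can then be serialized into a dataset.json-style file:
serialized = json.dumps(examples[0])
```

Repeating this over many operand ranges and operations (subtraction, multiplication, division) is what allows the dataset to scale to millions of examples without manual annotation.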
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
factored/fr-crawler-private-mlm | 2023-08-18T21:00:58.000Z | [
"region:us"
] | factored | null | null | null | 0 | 87 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': Data Engineer
'1': Data Analyst
'2': Data Scientist
'3': Machine Learning Engineer
'4': Software Engineer
'5': Analytics Engineer
'6': Full-Stack Engineer
'7': DevOps Engineer
'8': Research Assistant
'9': Data Architect
'10': Business Intelligence Developer
'11': Co-Founder
'12': Intern
'13': Risk Analyst
'14': Technical Lead
'15': Teaching Assistant
'16': Database Administrator
'17': Technical Leader
'18': Developer
'19': Consultant
'20': Researcher
'21': Senior Consultant
'22': Engineer
'23': BI Engineer
'24': Software Architect
'25': Professional Development
'26': Data Consultant
'27': Web Developer
'28': Blockchain Developer
'29': Operations Specialist
'30': Quantitative Analyst
'31': Statistician
'32': Data Management Analyst
'33': Statistical Consultant - United Nations Development Program - UNDP
'34': Financial analyst
'35': Data Mining Analyst
'36': Senior Machine learning engineer.
'37': Graduate Assistant
'38': ArcGIS Insights Product Manager
'39': Internship
'40': Lead Frontend Architect
'41': Marketplace Ops & Data Excellence Specialist
'42': Financial Analysts
'43': Lead Developer
'44': Senior Big Data Architect
'45': Postgres DBA
'46': Lecturer
'47': BI Intern
'48': Full Stack Engineer
'49': Professional Services Consultant
'50': Business Intelligence Consultant
'51': BI Developer Analyst, Makers Solutions SAS
'52': C++ Developer
'53': Functional Lead Sap BI
'54': Senior Marketing Analyst
'55': IT Analyst
'56': Graduate Teaching Assistant
'57': Digital Marketing
'58': University Lecturer
'59': Data Specialist
'60': DevOps
'61': Technology Assistant
'62': Civil Engineer
'63': Final Thesis Project
'64': Technical Cofounder
'65': Machine Learning Trainee
'66': Data Manager / Enterprise Architect
'67': BI Consultant
'68': Solutions Specialist
'69': Technical Leader Consultant
'70': Senior Engineering Manager
'71': BI & Data Specialist
'72': Professor
'73': Tech Lead
'74': Business Intelligence Manager
'75': Strategy Consultant
'76': Junior Trader FICs
'77': Front-End Developer
'78': Senior Developer
'79': Senior Integration Specialist
'80': Freelance Consultant
'81': Data Lead
'82': Semi-Senior Engineer
'83': Artificial Intelligence Researcher
'84': Junior Consultant
'85': Artificial Intelligence Specialist
'86': Teacher
'87': PS Consultant
'88': Research Intern
'89': Process Automation Team Leader
'90': Modeling Analyst
'91': Product Analyst
'92': RAPPI
'93': Operations Analyst I
'94': Systems Analyst
'95': PWC
'96': Adjunct Professor
'97': Six Sigma Project Consultant
'98': Computational Designer & Engineer
'99': Sr. Business Consultant
'100': Information Security Analyst
'101': Project Coordinator
'102': Support Analyst II
'103': Americas Retail Sales Analyst
'104': Integrations Developer
'105': CTO and Back-End Developer
'106': Chief Technology Officer
'107': Technology Consultant
'108': Head of Machine Learning
'109': Adjunct professor
'110': Production Analyst
'111': It Senior Analyst
'112': Database Administrator - Systems Analyst
'113': Bairesdev
'114': Business Support Senior Specialist
'115': Tech Lead for Real Time Time Data Systems
'116': Project Analyst
'117': Mobile Technology Consultant
'118': DevSecOps/DevOps
'119': Web Developer Intern
'120': Senior Data Lead
'121': Mobile And Web Developer
'122': Technical Support Specialist
'123': Financial Mathematics Lead
'124': Teacher Assistant
'125': Financial Analyst
'126': BI Developer, Business Intelligence Team
'127': Business Intelligence Intern
'128': BI Lead
'129': Senior Data Governance Specialist
'130': Architecture & Devops Lead
'131': Data Governance Analyst
'132': Associate Consultant
'133': Sr. Analyst
'134': Sr. Systems Analyst
'135': Sr DataLab Analyst
'136': System Analyst
'137': Big Data Architect
'138': IT Functional Analyst
'139': Young Researcher
'140': Freelance Developer
'141': Data Mining Analyst II
'142': Operations Maintenance Engineer
'143': Bi Tech Lead / Bi Dev
'144': Server Administrator
'145': Full Stack Engineer & Co-Founder
'146': Game Developer
'147': Devremote
'148': Technical Project Manager
'149': Mid Database Administrator
'150': Senior Data Manager
'151': OCC Analyst
'152': BI Senior Specialist
'153': Data Warehouse Architect
'154': Information Governance Analyst
'155': Sr. Data & Integrations Engineer
'156': Trainee - Onsite QA/QC Analyst
'157': Back-End Engineer
'158': Java Software Architect
'159': Lecturer and Research Engineer
'160': Director Of Knowledge Management
'161': Data Migration Analyst
'162': Freelancer/Contractor
'163': Undergraduate Researcher
'164': Colombia Data Warehouse Administrator
'165': Strategic Coordinator
'166': Frontend Developer
'167': Platform Engineer
'168': Commercial Analyst Business Development
'169': DBA Sql Server And Bi Analyst
'170': BI Specialist
'171': IT Infrastructure Engineer
'172': Sr. Multicloud Python Engineer
'173': Consultor De Sistemas
'174': Co-Founder and CTO
'175': Senior Devops Engineer
'176': Mid-Level Engineer
'177': Well Surveillance Engineer
'178': Machine Learning Tech Lead
'179': Pricing Analyst
'180': Author
'181': Consulting Analyst
'182': Python Trainer
'183': Machine Learning Research Intern
'184': Industry and Commerce Analyst
'185': Economics Advisor
'186': Senior Business Intelligence
'187': Product Development Analyst
'188': SAP BW / BO Consultant
'189': Junior Systems Engineer
'190': Freelance Computer Vision Engineer
'191': Research And Development Engineer
'192': Data Sourcing Tech Lead
'193': Freelancer
'194': Commercial Intern
'195': Co-Founder/ Back-End Engineer
'196': Co-Founder & Cto
'197': Project Engineer
'198': RPA Analyst
'199': Python Developer
'200': Technical/Functional Leader
'201': Head of IT
'202': Intern FICs
'203': Artificial Intelligence Intern
'204': Site Reliability Engineer Intern
'205': Graduate Teaching & Research Assistant
'206': Geospatial Analyst
'207': Digital Product Manager / Enterprise Architect
'208': Temporary Professor Computer Engineer
'209': Budget and Planning Leader
'210': IT Project Intern
'211': Logistics Analyst
'212': AmericaS Retail Sales Analyst
'213': Project Control And Budget Analyst
'214': Credit Risk Analyst
'215': TALENTU
'216': Business Process Improvement Analyst
'217': Product Owner
'218': Senior DevOps Engineer
'219': Big Data Senior Consultant
'220': Energy Analyst
'221': Software engineering consultant
'222': Senior Data Specialist
'223': Graduate Research Analyst Intern
'224': Financial Mathematics Senior Analyst
'225': Undergraduate Teaching Assistant (Multimedia Objects)
'226': Senior Back-End Developer
'227': Field Analyst
'228': BITECHCO
'229': Electronics Engineer
'230': System Administrator
'231': Seminar Assistant/Lecturer
'232': Data Leader
'233': Senior Platform Engineer
'234': Retention & Fidelization Leader
'235': Data Management Intern
'236': Commercial Department Assistant
'237': Research Advisor
'238': Full Stack Ruby On Rails Developer
'239': E-Commerce Operations Engineer
'240': Head of Predictive Modeling
'241': Cloud & Data Architect
'242': Industrial Network Engineer
'243': Production and Maintenance Intern
'244': Python Developer/ETL Engineer
'245': Project Leader
'246': Java Developer
'247': L2 Support Engineer, Customer Integration and Support
'248': Freelance Web Developer
'249': Undergraduate Research Assistant
'250': Project Research Professional
'251': IT Operations Specialist
'252': It Consultant
'253': AWS DevOps Engineer
'254': Procurement Specialist
'255': Geoscientist
'256': Productivity Analyst And Coach
'257': Product Designer
'258': Electrical Engineer Intern
'259': Financial Business Intelligence Intern
'260': Civil Enginner
'261': Revenue Management Analyst
'262': IT Specialist
'263': Data Governance Engineer
'264': Instructor
'265': Project Management Intern
'266': Technical Product Owner
'267': Cloud Developer
'268': Cloud Python Developer
'269': Bi Analyst
'270': Infrastructure Intern
'271': Analyst Specialist
'272': Technology Intern
'273': Machine Learning Researcher
'274': Internal Researcher
'275': Senior Integration Engineer
'276': PHP/ Python Developer
'277': Systems Architecture and Data Administration Assistant
'278': JR Research Assistant at Apolo Scientific Computing Center
'279': Financial & Statistics Advisor
'280': Data Sciencist
'281': Jr Actuary
'282': Data Integrations Engineer
'283': Innovations Intern
'284': Regional Credit + Accounts Receivable Manager Assistant
'285': Billing Analyst
'286': Engineering Intern
'287': Trainee Engineer
'288': Security Specialist
'289': Compliance Analyst
'290': Student Researcher
'291': Graduated Research Assistant
'292': Analyst Intern
'293': Sap Consultant
'294': Administrative Financial Analyst
'295': R&D Chemical Laboratory Analyst
'296': Design Engineering Intern
'297': Senior Business Intelligence Developer
'298': Industry and Commerce Intern
'299': Local Map Operations Analyst II
'300': Project Assistant for the Audit Department
'301': Software and Automation Engineer
'302': Customer Intelligence Analyst
'303': Pricing Lead
'304': Technical Team Leader
'305': Pre-Professional Paid Intern
'306': Laboratory Assistant
'307': Back-End Developer
'308': Intern Researcher
'309': Teacher and Researcher
'310': Biomedical Computer Vision Research Asst.
'311': CTO and Founder
'312': Programming Analyst
'313': Consultant on Customer Segmentation
'314': Credit Analyst
'315': Technical Analyst
'316': BI Teach Lead
'317': Assistant Professor
'318': Asset Allocation Analyst
'319': Research teacher’s assistant
'320': Mathematical Analyst
'321': IT Consultant
'322': Big Data Researcher
'323': IT Operations AWS Engineer
'324': Administrative Assistant
'325': Machine Learning Research Co-op
'326': It Intern
'327': Functional Analyst
'328': Graduate research assistant
'329': Assistant Developer
'330': Technical Maintenance Intern
'331': Senior Implementation Consultant
'332': System Development Manager
'333': Frontend React Architect
'334': Support Analyst
'335': Submarine ROV Supervisor
'336': Marketing & Sales Intern
'337': Strategic Planning Leader
'338': Business Intelligence Consultant and DBA
'339': BI and Dev. Team Lead
'340': Undergraduate Assistant
'341': Assistant Professor of Probability and Statistic I & II
'342': Technical Maintenance Clerk
'343': Associate Professor Of Discrete Event Simulation
'344': Supply Chain Analyst
'345': Infrastructure Mgmt Senior Analyst
'346': Data Analysis Researcher
'347': Marketing Intelligence Specialist
'348': Junior Programmer
'349': Data Platform Engineer
'350': CTO
'351': Senior System Administrator
'352': Data Administration and Security Control Analyst
'353': Research assistant
'354': Business Intelligence Leader
'355': UNIVERSIDAD EAFIT
'356': Risk Intern
'357': Applied Mathematics - Research Assistant
'358': IT Intern
'359': Head Of Systems
'360': Researcher and Developer
'361': Senior Full Stack Engineer
'362': Junior Project Manager
'363': Engineer Manager
'364': Technical Service Coordinator
'365': Software Support Engineer
'366': Research Analyst
'367': Latin America Insights And Strategy Intern
'368': Latin America Insights and Strategy Intern
'369': Frontend Intern
'370': Senior Infrastructure Administrator
'371': Process Analyst
'372': Product Manager
'373': Bioinformatician
'374': Qa Intern
'375': Product and Data Manager
'376': Finance Risk Intern
'377': Maintenance Planner
'378': Research Consultant
'379': Graduate Research Assistant
'380': It Assistant
'381': Research Staff Member
'382': Computational Biologist
'383': Machine Learning Consultant
'384': Engineering Tutor
'385': Programming Fundamentals Instructor
'386': Assistant Professor Industrial Engineering
'387': Engineer Intern
'388': Open Source Software Contributor
'389': Junior Business Intelligence Consultant
'390': Tutor of Statistics
'391': INNOVA Project Analyst II
'392': Analyst
'393': Junior Project Analyst
'394': Freelance developer
'395': C++ Agent Developer
'396': Data Research Analyst
'397': Application Developer
'398': IT Manager
'399': Pre-professional Paid Intern
'400': Senior Financial Risk Analyst
'401': Freelance Java Developer
'402': Junior Engineer
'403': Senior Logistics Analyst
'404': Factored
'405': Technology Team Leader
'406': Operations Analyst
'407': Statistics Intern
'408': Risk Management Supervisor/Analyst
'409': Security Developer
'410': Industrial Improvement Intern
'411': Data Intelligence Lead
'412': Jr Full Stack Scala Engineer
'413': Product Lead
'414': Academic Assistant
'415': Growth Operations Senior Analyst
'416': Mathematical Finance Researcher
'417': Scientific Consultant and Founder
'418': Solutions Engineer
'419': Senior DevOps Engineer
'420': Sales Promoter/Analyst/Marketing Intern
'421': Junior Lean Manufacturing Consultant
'422': Actuarial & Alternative Investments Specialist
'423': Product Specialist
'424': Engineering Assistant
'425': Quantitative Portfolio Manager
'426': Implementation Analyst
'427': Statistical Analyst
'428': Strategic Planning Analyst
'429': Machine Learning Professor
'430': Technology Strategy Intern
'431': Quality Coordinator
'432': Research Student
'433': BI Architect Professional
'434': Mid-Level Integration Engineer and Team Leader
'435': ETL Leader
'436': Database Analyst
'437': Automation Tester
'438': Master’s Course Designer
'439': Head Of Artificial Intelligence
'440': Mathematics Instructor Teacher
'441': Nanomagnetism Lab Graduate Research Assistant
'442': Mathematics TA
'443': Prodesp
'444': 'Teaching Assistant: Choice Theory'
'445': Teacher & Assistant
'446': Instructor Professor
'447': Microeconometrics Professor
'448': Algorithms Teaching Assistant
'449': Competitive Programming Tutor
'450': Innovation Analyst
'451': Adjunct Teacher
'452': Complementary Teacher
'453': Computer Vision Engineer
'454': Hadoop Developer
'455': Instructor - Discrete Event Simulation
'456': Online College Tutor
'457': Statistics Teaching Assistant
'458': University Subject Tutor
'459': Nanomagnetism Lab Undergrad Research Assistant
'460': Marketplace Sr Analyst LATAM
'461': Regional Credit Accounts Receivable Manager Assistant
'462': Graduate TA
'463': Scala Back-End Developer
'464': Operations Tech Lead
'465': Research Assistant For Rise Group
'466': Monitor
'467': Mechanical Designer
'468': <<Title02>>
'469': SAP Business Objects Consultant
'470': Bi Intern
'471': Senior Database Administrator
'472': Intelligent Automation Consultant
'473': Trainee - Commercial Analyst
'474': Trainee Commercial Analyst
'475': Graduate Professor
'476': Part-Time Lecturer
'477': Information Technology Intern
'478': Computer Architecture Tutor
'479': Power BI Instructor
'480': Code Reviewer
'481': Quality Analyst
'482': Golang Developer
'483': Senior Tutor
'484': Investments Analyst
'485': Trader
'486': Associate
'487': Research Assistant at Apolo Scientific Computing Center
'488': Business Development Operations Analyst
'489': SR Tech Lead & Project Leader
'490': Technical Consultant
'491': Junior Front end Developer
'492': Quality Analyst, Yuxi Global
'493': SR Technical Leader
'494': Economist
'495': Corporate Information Specialist
'496': IT Risk Consultant
'497': SAP BI Consultant
'498': Undergraduate Teaching Assistant
'499': Artificial Intelligence Developer
'500': Test Automation Engineer
'501': Portfolio Manager
'502': Credit Risk Senior Analyst
'503': Data Migration Professional
'504': Finance Intern
'505': Adjunct Laboratory Assistant
'506': Undergraduate Research Fellow
'507': Online Tutor
'508': Teacher Of Industrial Mechanics
'509': Artificial Lift Intern
'510': Master In Neurolinguistics Programming And Hypnotherapist
'511': Statistics Trainee
'512': Teacher / Lab Assistant
'513': Civil Engineer Intern
'514': Business Executive
'515': Data Intern
'516': PIBIC - CNPq
'517': Junior Data Modeler
'518': Solutions Architect
'519': Science Intern
'520': Information and Actuary Analyst
'521': Teacher assistant for Programming Fundamentals Course
'522': SR Technical & Project Leader
'523': Logistics Innovation Intern
'524': Oil Exploration Geologist
'525': NET Developer
'526': Processing Analyst
'527': Associate Engineer
'528': Electronics Engineer
'529': IT Course Contributor
'530': Embedded ARM developer
'531': <<Title03>>
'532': Marketing and Sales Assistant
'533': Trainee
'534': Jr Front-End Engineer
'535': Jr Back End Developer
'536': Programming Instructor
'537': Teacher (ad Honorem)
'538': HPE Solutions Architect
'539': Software System Engineer
'540': SQL Developer
'541': Technical Office Assistant Engineer
'542': Developer and Consultant
'543': Netcom
'544': Treasury Analyst
'545': <<Title01>>
'546': Geologo
'547': Senior Technical Team Lead
'548': Conversion Analyst
'549': SAS Consultant
'550': Computer Programmer
'551': SAP Senior Consultant
'552': Medea Interactiva
'553': Mid-Level Integration Engineer
'554': Apple Developer
'555': Growth Marketing Analyst
'556': Robotics And Control Systems Engineer
'557': Financial Risk Analyst Intern
- name: keywords
dtype: string
splits:
- name: train
num_bytes: 1704497.3801787165
num_examples: 2215
- name: val
num_bytes: 568678.8099106418
num_examples: 739
- name: test
num_bytes: 568678.8099106418
num_examples: 739
download_size: 877814
dataset_size: 2841855.0
---
# Dataset Card for "fr-crawler-private-mlm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ignmilton/ign_clean_instruct_dataset_500k | 2023-06-13T07:45:51.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | ignmilton | null | null | null | 17 | 87 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- code
pretty_name: ign_500k
size_categories:
- 100K<n<1M
---
This dataset contains ~508k prompt-instruction pairs with high-quality responses. It was synthetically created from a subset of UltraChat prompts. It does not contain any alignment-focused responses or NSFW content.
Licensed under apache-2.0 |
abacusai/LongChat-Lines | 2023-07-28T03:14:01.000Z | [
"region:us"
] | abacusai | null | null | null | 9 | 87 | ---
configs:
- config_name: default
data_files:
- split: '100'
path: data/100-*
- split: '150'
path: data/150-*
- split: '175'
path: data/175-*
- split: '200'
path: data/200-*
- split: '250'
path: data/250-*
- split: '300'
path: data/300-*
- split: '400'
path: data/400-*
- split: '500'
path: data/500-*
- split: '600'
path: data/600-*
- split: '680'
path: data/680-*
- split: '750'
path: data/750-*
- split: '850'
path: data/850-*
- split: '950'
path: data/950-*
- split: '1100'
path: data/1100-*
dataset_info:
features:
- name: expected_number
dtype: int64
- name: num_lines
dtype: int64
- name: token_size
dtype: int64
- name: prompt
dtype: string
splits:
- name: '100'
num_bytes: 275673
num_examples: 50
- name: '150'
num_bytes: 400446
num_examples: 50
- name: '175'
num_bytes: 463159
num_examples: 50
- name: '200'
num_bytes: 525856
num_examples: 50
- name: '250'
num_bytes: 650643
num_examples: 50
- name: '300'
num_bytes: 775800
num_examples: 50
- name: '400'
num_bytes: 1025288
num_examples: 50
- name: '500'
num_bytes: 1276039
num_examples: 50
- name: '600'
num_bytes: 1524627
num_examples: 50
- name: '680'
num_bytes: 1724325
num_examples: 50
- name: '750'
num_bytes: 1899422
num_examples: 50
- name: '850'
num_bytes: 2149220
num_examples: 50
- name: '950'
num_bytes: 2398398
num_examples: 50
- name: '1100'
num_bytes: 2772556
num_examples: 50
download_size: 7270406
dataset_size: 17861452
---
# Dataset Card for "LongChat-Lines"
This dataset was used to evaluate the performance of models fine-tuned to operate on longer contexts. It is based on
a task template proposed by LMSys to evaluate attention to arbitrary points in the context. See the full details at
[https://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context). |
C-MTEB/AFQMC | 2023-07-28T13:39:01.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 87 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 378718
num_examples: 3861
- name: train
num_bytes: 3396503
num_examples: 34334
- name: validation
num_bytes: 426285
num_examples: 4316
download_size: 2337418
dataset_size: 4201506
---
# Dataset Card for "AFQMC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pankajmathur/lima_unchained_v1 | 2023-08-25T22:18:57.000Z | [
"region:us"
] | pankajmathur | null | null | null | 1 | 87 | An attempt to unchain Llama2 by using only 780 filtered examples from [GAIR/lima](https://huggingface.co/datasets/GAIR/lima). |
heegyu/aulm-0809 | 2023-08-22T03:33:28.000Z | [
"region:us"
] | heegyu | null | null | null | 2 | 87 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 704591219
num_examples: 171404
download_size: 311285345
dataset_size: 704591219
---
A dataset created by unifying the format of publicly available Korean instruction datasets and merging them.
| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | Single-turn |
| [part2_ko_uncleaned from dbdu/ShareGPT-74k-ko](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | Multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | Multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | Single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3K | Single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | Single-turn, multi-turn (a very small portion) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | Single-turn |
- From the KULLM v2 dataset, only the GPT4ALL and Dolly data were extracted and used.
- See the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository for a variety of Korean training datasets.
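The `conversations` feature declared in the YAML above is a list of `{from, value}` turns. A minimal sketch of wrapping a single-turn instruction pair into that schema (the `human`/`gpt` role names are an assumption, not stated in this card):

```python
def to_conversations(instruction: str, output: str) -> dict:
    """Wrap one (instruction, output) pair in the unified
    `conversations` schema: a list of {from, value} turns."""
    return {
        "conversations": [
            {"from": "human", "value": instruction},  # user turn
            {"from": "gpt", "value": output},         # assistant turn
        ]
    }

example = to_conversations("What is KoAlpaca?", "A Korean instruction dataset.")
```

Multi-turn sources would simply append further alternating turns to the same list.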
|
yzhuang/autotree_automl_100000_electricity_sgosdt_l256_dim7_d3_sd0 | 2023-09-07T21:52:00.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 87 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2057200000
num_examples: 100000
- name: validation
num_bytes: 205720000
num_examples: 10000
download_size: 578994225
dataset_size: 2262920000
---
# Dataset Card for "autotree_automl_100000_electricity_sgosdt_l256_dim7_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kewu93/three_styles_prompted_250_512x512 | 2023-09-21T23:53:43.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 87 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: style_class
dtype: string
splits:
- name: train
num_bytes: 17235209.8
num_examples: 600
- name: val
num_bytes: 4420404.2
num_examples: 150
download_size: 21435960
dataset_size: 21655614.0
---
# Dataset Card for "three_styles_prompted_250_512x512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kili-technology/plastic_in_river | 2022-10-21T07:13:58.000Z | [
"task_categories:object-detection",
"size_categories:1K<n<10K",
"source_datasets:original",
"other-object-detection",
"region:us"
] | kili-technology | This dataset contains photos of rivers on which there may be waste. The waste items are annotated
through bounding boxes, and are assigned to one of the 4 following categories: plastic bottle, plastic bag,
another plastic waste, or non-plastic waste. Note that some photos may not contain any waste. | null | null | 13 | 86 | ---
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: Plastic in river
tags:
- other-object-detection
---
# Plastic in river
This dataset is an export of the annotated assets from the [Kili's Community Challenge - Plastic in River dataset](https://kili-technology.com/blog/kili-s-community-challenge-plastic-in-river-dataset).
The Hugging Face dataset will be updated every day during the challenge with the latest annotations. |
albertxu/CrosswordQA | 2022-10-29T23:45:36.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:unknown",
"region:us"
] | albertxu | null | null | null | 3 | 86 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for CrosswordQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/albertkx/Berkeley-Crossword-Solver
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Albert Xu](mailto:albertxu@usc.edu) and [Eshaan Pathak](mailto:eshaanpathak@berkeley.edu)
### Dataset Summary
The CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
```
{
"id": 0,
"clue": "Clean-up target",
"answer": "mess"
}
```
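The summary notes that multi-word answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear) and may occasionally be wrong. The actual segmenter is not described in this card; the toy greedy longest-match sketch below only illustrates the idea, with a made-up vocabulary:

```python
def segment(answer: str, vocab: set) -> str:
    """Greedy longest-match segmentation of an all-caps crossword answer.

    Illustrative only -- not the segmenter used to build this dataset.
    """
    s = answer.lower()
    words, i = [], 0
    while i < len(s):
        for j in range(len(s), i, -1):  # try the longest candidate first
            if s[i:j] in vocab:
                words.append(s[i:j])
                i = j
                break
        else:
            words.append(s[i])  # no match: fall back to a single character
            i += 1
    return " ".join(words)

vocab = {"buzz", "lightyear", "light", "year"}
print(segment("BUZZLIGHTYEAR", vocab))  # -> buzz lightyear
```

A greedy approach like this is exactly where occasional mis-segmentations come from: a wrong longest match early on can derail the rest of the string.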
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
roszcz/internship-midi-data-science | 2023-02-28T07:34:05.000Z | [
"region:us"
] | roszcz | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: notes
struct:
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: control_changes
struct:
- name: number
sequence: int64
- name: time
sequence: float64
- name: value
sequence: int64
- name: user
dtype: string
- name: record_id
dtype: int64
splits:
- name: train
num_bytes: 23504548
num_examples: 6
download_size: 7471835
dataset_size: 23504548
---
# Dataset Card for "internship-midi-data-science"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Arabic-Clip/ImageCaptions-7M-Translations-Arabic | 2023-07-17T09:13:49.000Z | [
"region:us"
] | Arabic-Clip | null | null | null | 0 | 86 | Entry not found |
Arjun-G-Ravi/Python-codes | 2023-08-12T07:43:19.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"code",
"region:us"
] | Arjun-G-Ravi | null | null | null | 2 | 86 | ---
license: mit
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- code
pretty_name: Python codes dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
Please note that this dataset may not be perfect and may contain a very small quantity of non-Python code.
### Dataset Summary
The dataset contains a collection of Python questions and their code. It is meant to be used for training models to be proficient in Python-specific coding.
The dataset has two features - 'question' and 'code'.
An example is:
```
{'question': 'Create a function that takes in a string and counts the number of vowels in it',
'code': 'def count_vowels(string):\n vowels = ["a", "e", "i", "o", "u"]\n count = 0\n for char in string:\n if char in vowels:\n count += 1\n return count'}
```
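The example's code runs as-is; a quick sanity check (note it counts lowercase vowels only, so input casing matters):

```python
def count_vowels(string):
    # verbatim from the example pair above
    vowels = ["a", "e", "i", "o", "u"]
    count = 0
    for char in string:
        if char in vowels:
            count += 1
    return count

print(count_vowels("programming"))  # -> 3 (o, a, i)
```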
### Languages
English, Python
### Source Data
The dataset is derived from two other coding-based datasets:
1) sahil2801/CodeAlpaca-20k
2) neulab/conala
@inproceedings{yin2018learning,
title={Learning to mine aligned code and natural language pairs from stack overflow},
author={Yin, Pengcheng and Deng, Bowen and Chen, Edgar and Vasilescu, Bogdan and Neubig, Graham},
booktitle={2018 IEEE/ACM 15th international conference on mining software repositories (MSR)},
pages={476--486},
year={2018},
organization={IEEE}
}
### Licensing Information
This uses MIT licence
### Citation Information
Will be added soon
|
yzhuang/autotree_pmlb_100000_banana_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T17:16:23.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 1545200000
num_examples: 100000
- name: validation
num_bytes: 154520000
num_examples: 10000
download_size: 281108655
dataset_size: 1699720000
---
# Dataset Card for "autotree_pmlb_100000_banana_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T01:58:08.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 622819971
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0 | 2023-09-08T03:18:22.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2159600000
num_examples: 100000
- name: validation
num_bytes: 215960000
num_examples: 10000
download_size: 848115506
dataset_size: 2375560000
---
# Dataset Card for "autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T06:27:42.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1008851704
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T06:58:08.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1014035692
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T16:34:51.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1062661836
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T17:38:59.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 809891483
dataset_size: 2600840000
---
# Dataset Card for "autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hakanssonjesper/dataset-llama | 2023-10-01T16:39:18.000Z | [
"region:us"
] | hakanssonjesper | null | null | null | 0 | 86 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 65284594.45526487
num_examples: 45592
- name: validation
num_bytes: 16322580.544735134
num_examples: 11399
download_size: 38476271
dataset_size: 81607175.0
---
# Dataset Card for "dataset-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cornell_movie_dialog | 2023-04-05T10:02:37.000Z | [
"language:en",
"region:us"
] | null | This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts:
- 220,579 conversational exchanges between 10,292 pairs of movie characters
- involves 9,035 characters from 617 movies
- in total 304,713 utterances
- movie metadata included:
- genres
- release year
- IMDB rating
- number of IMDB votes
- character metadata included:
- gender (for 3,774 characters)
- position on movie credits (3,321 characters) | @InProceedings{Danescu-Niculescu-Mizil+Lee:11a,
author={Cristian Danescu-Niculescu-Mizil and Lillian Lee},
title={Chameleons in imagined conversations:
A new approach to understanding coordination of linguistic style in dialogs.},
booktitle={Proceedings of the
Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011},
year={2011}
} | null | 11 | 85 | ---
language:
- en
paperswithcode_id: cornell-movie-dialogs-corpus
pretty_name: Cornell Movie-Dialogs Corpus
dataset_info:
features:
- name: movieID
dtype: string
- name: movieTitle
dtype: string
- name: movieYear
dtype: string
- name: movieIMDBRating
dtype: string
- name: movieNoIMDBVotes
dtype: string
- name: movieGenres
sequence: string
- name: characterID1
dtype: string
- name: characterID2
dtype: string
- name: characterName1
dtype: string
- name: characterName2
dtype: string
- name: utterance
sequence:
- name: text
dtype: string
- name: LineID
dtype: string
splits:
- name: train
num_bytes: 19548840
num_examples: 83097
download_size: 9916637
dataset_size: 19548840
---
# Dataset Card for "cornell_movie_dialog"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html](http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 9.92 MB
- **Size of the generated dataset:** 19.55 MB
- **Total amount of disk used:** 29.46 MB
### Dataset Summary
This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts:
- 220,579 conversational exchanges between 10,292 pairs of movie characters
- involves 9,035 characters from 617 movies
- in total 304,713 utterances
- movie metadata included:
- genres
- release year
- IMDB rating
- number of IMDB votes
- character metadata included:
- gender (for 3,774 characters)
- position on movie credits (3,321 characters)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 9.92 MB
- **Size of the generated dataset:** 19.55 MB
- **Total amount of disk used:** 29.46 MB
An example of 'train' looks as follows.
```
{
"characterID1": "u0 ",
"characterID2": " u2 ",
"characterName1": " m0 ",
"characterName2": " m0 ",
"movieGenres": ["comedy", "romance"],
"movieID": " m0 ",
"movieIMDBRating": " 6.90 ",
"movieNoIMDBVotes": " 62847 ",
"movieTitle": " f ",
"movieYear": " 1999 ",
"utterance": {
"LineID": ["L1"],
"text": ["L1 "]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `movieID`: a `string` feature.
- `movieTitle`: a `string` feature.
- `movieYear`: a `string` feature.
- `movieIMDBRating`: a `string` feature.
- `movieNoIMDBVotes`: a `string` feature.
- `movieGenres`: a `list` of `string` features.
- `characterID1`: a `string` feature.
- `characterID2`: a `string` feature.
- `characterName1`: a `string` feature.
- `characterName2`: a `string` feature.
- `utterance`: a dictionary feature containing:
- `text`: a `string` feature.
- `LineID`: a `string` feature.
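Because the nested `utterance` feature stores `LineID` and `text` as parallel lists, reconstructing an exchange is just a matter of zipping them. A minimal sketch — the record below is fabricated to match the documented schema, not real corpus data:

```python
# Sketch: unpack the parallel LineID/text lists of one example into
# ordered (LineID, text) pairs. The record is a made-up example that
# follows the schema above, not an actual row of the dataset.
example = {
    "characterName1": "BIANCA",
    "characterName2": "CAMERON",
    "utterance": {
        "LineID": ["L194", "L195"],
        "text": [
            "Can we make this quick?",
            "Well, I thought we'd start with pronunciation.",
        ],
    },
}

def utterance_pairs(record):
    """Zip the parallel LineID/text lists into a list of tuples."""
    utt = record["utterance"]
    return list(zip(utt["LineID"], utt["text"]))

pairs = utterance_pairs(example)
# pairs[0] == ("L194", "Can we make this quick?")
```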
### Data Splits
| name |train|
|-------|----:|
|default|83097|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{Danescu-Niculescu-Mizil+Lee:11a,
author={Cristian Danescu-Niculescu-Mizil and Lillian Lee},
title={Chameleons in imagined conversations:
A new approach to understanding coordination of linguistic style in dialogs.},
booktitle={Proceedings of the
Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011},
year={2011}
}
```
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
un_multi | 2023-06-01T14:59:54.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:ru",
"language:zh",
"license:unknown",
"region:us"
] | null | This is a collection of translated documents from the United Nations. This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language | @inproceedings{eisele-chen-2010-multiun,
title = "{M}ulti{UN}: A Multilingual Corpus from United Nation Documents",
author = "Eisele, Andreas and
Chen, Yu",
booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
month = may,
year = "2010",
address = "Valletta, Malta",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_Paper.pdf",
abstract = "This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.",
}
@InProceedings{TIEDEMANN12.463,
author = {Jörg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
} | null | 2 | 85 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- de
- en
- es
- fr
- ru
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: multiun
pretty_name: Multilingual Corpus from United Nation Documents
dataset_info:
- config_name: ar-de
features:
- name: translation
dtype:
translation:
languages:
- ar
- de
splits:
- name: train
num_bytes: 94466397
num_examples: 165090
download_size: 21869935
dataset_size: 94466397
- config_name: ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 4189852369
num_examples: 9759125
download_size: 1036296368
dataset_size: 4189852369
- config_name: ar-es
features:
- name: translation
dtype:
translation:
languages:
- ar
- es
splits:
- name: train
num_bytes: 4509675284
num_examples: 10119379
download_size: 1101206667
dataset_size: 4509675284
- config_name: ar-fr
features:
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: train
num_bytes: 4516850009
num_examples: 9929567
download_size: 1109705925
dataset_size: 4516850009
- config_name: ar-ru
features:
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: train
num_bytes: 5932866867
num_examples: 10206243
download_size: 1261123878
dataset_size: 5932866867
- config_name: ar-zh
features:
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: train
num_bytes: 3781658413
num_examples: 9832293
download_size: 1009696775
dataset_size: 3781658413
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 76684549
num_examples: 162981
download_size: 19468529
dataset_size: 76684549
- config_name: de-es
features:
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 80936653
num_examples: 162078
download_size: 20266591
dataset_size: 80936653
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 81888435
num_examples: 164025
download_size: 20692837
dataset_size: 81888435
- config_name: de-ru
features:
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: train
num_bytes: 111517934
num_examples: 164792
download_size: 23507789
dataset_size: 111517934
- config_name: de-zh
features:
- name: translation
dtype:
translation:
languages:
- de
- zh
splits:
- name: train
num_bytes: 70534818
num_examples: 176933
download_size: 19927209
dataset_size: 70534818
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 4128141663
num_examples: 11350967
download_size: 1123164180
dataset_size: 4128141663
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 4678055160
num_examples: 13172019
download_size: 1355002731
dataset_size: 4678055160
- config_name: en-ru
features:
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 5632662839
num_examples: 11654416
download_size: 1285801078
dataset_size: 5632662839
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 2960376046
num_examples: 9564315
download_size: 900076520
dataset_size: 2960376046
- config_name: es-fr
features:
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 4454712498
num_examples: 11441889
download_size: 1195733510
dataset_size: 4454712498
- config_name: es-ru
features:
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 5442655730
num_examples: 10605056
download_size: 1228045966
dataset_size: 5442655730
- config_name: es-zh
features:
- name: translation
dtype:
translation:
languages:
- es
- zh
splits:
- name: train
num_bytes: 3223871198
num_examples: 9847770
download_size: 953250084
dataset_size: 3223871198
- config_name: fr-ru
features:
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 5979879089
num_examples: 11761738
download_size: 1364307157
dataset_size: 5979879089
- config_name: fr-zh
features:
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: train
num_bytes: 3241098333
num_examples: 9690914
download_size: 962824881
dataset_size: 3241098333
- config_name: ru-zh
features:
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: train
num_bytes: 4233875537
num_examples: 9557007
download_size: 1037881127
dataset_size: 4233875537
config_names:
- ar-de
- ar-en
- ar-es
- ar-fr
- ar-ru
- ar-zh
- de-en
- de-es
- de-fr
- de-ru
- de-zh
- en-es
- en-fr
- en-ru
- en-zh
- es-fr
- es-ru
- es-zh
- fr-ru
- fr-zh
- ru-zh
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MultiUN](http://www.euromatrixplus.net/multi-unp)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a collection of translated documents from the United Nations.
This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language.
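The 21 `config_names` listed in this card's metadata are simply every unordered pair of the corpus's seven language codes (the six official UN languages plus German), joined with a hyphen. A small sketch of how that list arises:

```python
from itertools import combinations

# The pairwise configs in the YAML header are all 2-combinations of the
# language codes, hyphen-joined (e.g. "ar-de", ..., "ru-zh").
LANGS = ["ar", "de", "en", "es", "fr", "ru", "zh"]

config_names = ["-".join(pair) for pair in combinations(LANGS, 2)]
# C(7, 2) = 21 configs, matching the list in the metadata
```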
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{eisele-chen-2010-multiun,
title = "{M}ulti{UN}: A Multilingual Corpus from United Nation Documents",
author = "Eisele, Andreas and
Chen, Yu",
booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
month = may,
year = "2010",
address = "Valletta, Malta",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/686_Paper.pdf",
abstract = "This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.",
}
```
```
@InProceedings{TIEDEMANN12.463,
author = {Jörg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
SetFit/go_emotions | 2022-09-08T15:41:33.000Z | [
"region:us"
] | SetFit | null | null | null | 4 | 85 | # GoEmotions
This dataset is a port of the official [`go_emotions` dataset](https://huggingface.co/datasets/go_emotions) on the Hub. It only contains the `simplified` subset as these are the only fields we need for text classification. |
laion/laion-coco | 2022-10-23T18:55:09.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | null | 38 | 85 | ---
license: cc-by-4.0
---
# LAION COCO: 600M SYNTHETIC CAPTIONS FROM LAION2B-EN
by: Christoph Schuhmann, Andreas Köpf, Richard Vencu, Theo Coombes, Romain Beaumont, 10 Oct, 2022
Author: Christoph Schuhmann, Andreas Köpf, Theo Coombes, Richard Vencu, Benjamin Trom, Romain Beaumont
We present LAION-COCO, the world’s largest dataset of 600M generated high-quality captions for publicly available web-images
LAION-5B has five billion natural captions. They provide a lot of information, but could synthetic captions complement them? To answer this question, we use a combination of existing, publicly available models to produce high-quality captions for images in the style of MS COCO. We captioned 600M images from the English subset of LAION-5B with an ensemble of BLIP L/14 and 2 CLIP versions (L/14 and RN50x64).
This will make it possible to investigate the value of generated captions for training models. We’re curious how these synthetic captions will impact models trained on them!
The 600M samples are provided in parquet files. Columns include the original caption, the URL, the top caption, and a list of alternative captions with lower CLIP similarity scores.
## Method
The method we used to generate these captions was to:
- Use BLIP L/14 to generate 40 candidate captions
- Rank them with the OpenAI CLIP L/14 model and select the best 5 captions
- Re-rank with the OpenAI CLIP RN50x64 model to select the best one
- Use a small, fine-tuned T0 model to roughly repair grammar and punctuation of the texts
The hyperparameters were chosen through a grid search (settings) by Andreas Köpf to best match the style (ROUGE scores) of MS COCO texts.
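The generate → rank → re-rank steps above can be sketched as a small function. The model calls are stubbed out as plain callables here; the function names and signatures are illustrative assumptions, not the actual pipeline code:

```python
# Sketch of the caption-selection ensemble described above.
# `generate` stands in for BLIP L/14, `score_l14` for OpenAI CLIP L/14,
# and `score_rn50x64` for OpenAI CLIP RN50x64 — all assumed callables.
def select_caption(image, generate, score_l14, score_rn50x64,
                   n_candidates=40, keep=5):
    # 1) generate candidate captions with BLIP
    candidates = generate(image, n_candidates)
    # 2) keep the `keep` candidates with the highest CLIP L/14 similarity
    top = sorted(candidates, key=lambda c: score_l14(image, c),
                 reverse=True)[:keep]
    # 3) re-rank the survivors with RN50x64 and return the single best one
    return max(top, key=lambda c: score_rn50x64(image, c))
```

A fine-tuned T0 model would then post-process the selected caption for grammar and punctuation.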
## Evaluation
We evaluated these generated captions by asking human evaluators to guess whether a caption comes from a human or an AI model. We also asked them to rate the quality on a scale from 0 (bad) to 5 (good).
In a first round we presented each evaluator with 200 samples, containing 100 AI-generated and 100 human-written MS COCO captions.
## Observations
| Samples | Mean rating | Stdev |
|---|---|---|
| Written by a human | 3.98 | 0.99 |
| Generated by an AI | 3.89 | 1.12 |
| Believed by the annotator to be written by a human | 4.44 | 0.61 |
| Believed by the annotator to be generated by an AI | 3.50 | 1.15 |
## Interpretation
It is very interesting that the mean scores of the human-written and the model-generated samples are very similar. We also notice that the standard deviation of the generated captions is a little bit higher.
We hypothesize that in most cases the quality of the generated captions is perceived as being as good as the quality of the human-written captions.
But sometimes the captioning model obviously fails and the quality of the result is low, because the model does not really understand what is going on in the picture: its knowledge is not grounded in a sufficiently sophisticated world model.
## Failure cases
“Two people posing for the camera in their wedding attire, one with an umbrella over his head and another with long red hair.”
“An older man having a heart attack, with his hand on the chest.”
When we remove all samples from the evaluations that have ratings of either 0 or 1, we observe that the mean ratings and standard deviations move closer together.
Scores without ratings of 0 and 1:
| Samples | Mean rating | Stdev |
|---|---|---|
| Written by a human | 4.07 | 0.81 |
| Generated by an AI | 4.02 | 0.94 |
The mean ratings of the generated captions are still a little bit lower and the standard deviation is still a little bit higher, but the trend is pretty clear. By also removing samples with rating 2, the gap between the two would probably decrease even further.
Presenting only generated captions:
In a next step, we presented the human evaluators with 400 captions that were all generated by the model (no human-written captions in between):
- Mean rating of all samples: 3.81
- Standard deviation of all samples: 0.94
- Rated as human: 47.5%
- Rated as AI: 52.5%
We observe that in 47.5% of all cases the human evaluators thought the captions were written by a human. This makes us confident that our captions are on average pretty good. When we later told the evaluators that all captions were generated by the model, they told us that it was very hard for them to judge whether a caption was written by a model or a human, and that it was only easy for them in obvious failure cases.
## Conclusions
We conclude that our ensemble of BLIP and CLIP is already pretty good and capable of generating captions with a quality that is on average close to that of the human-written captions of MS COCO.
It would be very interesting for future work to let people rate our generated captions at larger scale and then filter out the samples with low rating values. These results could be used to train models to rate the quality of captions and to predict whether a caption looks generated or human-written.
And even without further automated filtering, an ensemble of our captions and human evaluators would be a pretty good workflow to curate high-quality captions at much lower cost than asking humans to write them from scratch.
## Credit assignments
- Christoph Schuhmann lead the project, implemented a first version of the code, ran most of the generations & conducted the human evaluations
- Andreas Köpf conducted the hyperparameter search & wrote the code to execute BLIP + CLIP filtering at scale
- Theo Coombes managed the server that coordinated which GPU worker got which part of LAION to work on
- Romain Beaumont packaged the .json into parquet files, sent to HF and wrote the first draft of this post
- Richard Vencu provided the infrastructure to use the idle compute for this project
- Benjamin Trom wrote code that helped us convert the .json files to parquet
We thank stability.ai for providing the compute used to generate the captions in the dataset. |
LLukas22/NLQuAD | 2022-12-23T13:04:58.000Z | [
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-3.0",
"region:us"
] | LLukas22 | null | null | null | 1 | 85 | ---
pretty_name: NLQuAD
language:
- en
license:
- cc-by-3.0
size_categories:
- 10K<n<100K
multilinguality:
- monolingual
task_ids:
- extractive-qa
dataset_info:
features:
- name: title
dtype: string
- name: date
dtype: string
- name: paragraphs
list:
- name: context
dtype: string
- name: qas
list:
- name: answers
list:
- name: answer_end
dtype: int64
- name: answer_start
dtype: int64
- name: text
dtype: string
- name: id
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 72036724
num_examples: 10259
- name: test
num_bytes: 9045482
num_examples: 1280
- name: validation
num_bytes: 8876137
num_examples: 1280
download_size: 0
dataset_size: 89958343
---
# Dataset Card for "NLQuAD"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/ASoleimaniB/NLQuAD](https://github.com/ASoleimaniB/NLQuAD)
- **Paper: https://aclanthology.org/2021.eacl-main.106/**
- **Size of the generated dataset:** 89.95 MB
### Dataset Summary
This is a copy of the original NLQuAD dataset distributed via [Github](https://github.com/ASoleimaniB/NLQuAD).
NLQuAD is a non-factoid long question answering dataset from BBC news articles.
NLQuAD’s question types and the long length of its context documents, as well as of its answers, make it a challenging real-world task.
NLQuAD consists of news articles as context documents, interrogative sub-headings in the articles as questions, and body paragraphs corresponding to the sub-headings as contiguous answers to the questions.
NLQuAD contains 31k non-factoid questions and long answers collected from 13k BBC news articles.
See example articles in BBC [1](https://www.bbc.com/news/world-asia-china-51230011), [2](https://www.bbc.com/news/world-55709428).
We automatically extract target answers because annotating for non-factoid long QA is extremely challenging and costly.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"title": "Khashoggi murder: Body 'dissolved in acid'",
"date": "2 November 2018",
"paragraphs":[
{
"context": "A top Turkish official, presidential adviser Yasin Aktay, has said ....",
"qas":[
{
"question":"What was said in the crown prince's alleged phone call?",
"id":"0_0",
"answers":[
{
"text":"During the call with President Donald Trump\'s son-in-law Jared Kushner and national ....",
"answer_start":1352,
"answer_end": 2108,
}
]
},
{
"question":"What has the investigation found so far?",
"id":"0_1",
"answers":[
{
"text":"There is still no consensus on how Khashoggi died. He entered ....",
"answer_start":2109,
"answer_end": 3128,
}
]
},
]
}
]
}
```
### Data Fields
The data fields are the same among all splits.
- `title`: a `string` feature.
- `date`: a `string` feature.
- `paragraphs`: a list feature containing dictionaries:
- `context`: a `string` feature.
- `qas`: a list feature containing dictionaries:
- `question`: a `string` feature.
- `id`: a `string` feature.
- `answers`: a list feature containing dictionaries:
- `text`: a `string` feature.
- `answer_start`: an `int32` feature.
- `answer_end`: an `int32` feature.
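Since answers are contiguous spans, `context[answer_start:answer_end]` should reproduce the answer text exactly. A quick self-contained check of this convention, using a fabricated mini-example rather than real corpus data:

```python
# Verify the span convention: the answer text is recoverable by slicing
# the context with the character offsets. Toy data for illustration only.
context = "Intro paragraph. The investigation found no consensus on the cause."
answer = {
    "answer_start": 17,
    "answer_end": 67,
    "text": "The investigation found no consensus on the cause.",
}

span = context[answer["answer_start"]:answer["answer_end"]]
assert span == answer["text"]
```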
### Data Splits
| name |train|test|validation|
|----------|----:|----:|---------:|
| |10259| 1280| 1280|
## Additional Information
### Licensing Information
This dataset is distributed under the [CC BY-NC](https://creativecommons.org/licenses/by-nc/3.0/) licence providing free access for non-commercial and academic usage.
### Citation Information
BibTeX:
```json
@inproceedings{soleimani-etal-2021-nlquad,
title = "{NLQ}u{AD}: A Non-Factoid Long Question Answering Data Set",
author = "Soleimani, Amir and
Monz, Christof and
Worring, Marcel",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-main.106",
doi = "10.18653/v1/2021.eacl-main.106",
pages = "1245--1255",
abstract = "We introduce NLQuAD, the first data set with baseline methods for non-factoid long question answering, a task requiring document-level language understanding. In contrast to existing span detection question answering data sets, NLQuAD has non-factoid questions that are not answerable by a short span of text and demanding multiple-sentence descriptive answers and opinions. We show the limitation of the F1 score for evaluation of long answers and introduce Intersection over Union (IoU), which measures position-sensitive overlap between the predicted and the target answer spans. To establish baseline performances, we compare BERT, RoBERTa, and Longformer models. Experimental results and human evaluations show that Longformer outperforms the other architectures, but results are still far behind a human upper bound, leaving substantial room for improvements. NLQuAD{'}s samples exceed the input limitation of most pre-trained Transformer-based models, encouraging future research on long sequence language models.",
}
``` |
ai4bharat/naamapadam | 2023-05-24T17:09:03.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc0-1.0",
"arxiv:2212.10168",
"region:us"
] | ai4bharat | \ | \ | null | 2 | 85 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: naamapadam
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for naamapadam
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/AI4Bharat/indicner
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Anoop Kunchukuttan
### Dataset Summary
Naamapadam is the largest publicly available Named Entity Annotated dataset for 11 Indic languages. This corpus was created by projecting named entities from the English side to the Indic-language side of the English-Indic parallel corpus. The dataset additionally contains manually labelled test sets for 8 Indic languages, each containing 500-1000 sentences.
### Supported Tasks and Leaderboards
**Tasks:** NER on Indian languages.
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
{'words': ['उन्हेनें', 'शिकांगों','में','बोरोडिन','की','पत्नी','को','तथा','वाशिंगटन','में','रूसी','व्यापार','संघ','को','पैसे','भेजे','।'],
'ner': [0, 3, 0, 1, 0, 0, 0, 0, 3, 0, 5, 6, 6, 0, 0, 0, 0],
}
### Data Fields
- `words`: Raw tokens in the dataset.
- `ner`: the NER tags for this dataset.
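The `ner` values are integer class indices. A minimal decoding sketch is shown below; the label list is an assumption for illustration, and the authoritative order should be read from `dataset.features["ner"].feature.names` if the tags are stored as a `ClassLabel` feature:

```python
# Hypothetical BIO label order; verify against
# dataset.features["ner"].feature.names after loading.
LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def decode_tags(tag_ids):
    """Map integer NER tag ids to their string labels."""
    return [LABELS[i] for i in tag_ids]

print(decode_tags([0, 3, 0, 1, 0]))  # -> ['O', 'B-ORG', 'O', 'B-PER', 'O']
```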
### Data Splits
(to be updated, see paper for correct numbers)
| Language | Train | Validation | Test |
|---:|---:|---:|---:|
| as | 10266 | 52 | 51 |
| bn | 961679 | 4859 | 607 |
| gu | 472845 | 2389 | 50 |
| hi | 985787 | 13460 | 437 |
| kn | 471763 | 2381 | 1019 |
| ml | 716652 | 3618 | 974 |
| mr | 455248 | 2300 | 1080 |
| or | 196793 | 993 | 994 |
| pa | 463534 | 2340 | 2342 |
| ta | 497882 | 2795 | 49 |
| te | 507741 | 2700 | 53 |
## Usage
You need the `datasets` package installed to use the :rocket: Hugging Face datasets repository. Install it via pip:
```bash
pip install datasets
```
To load the dataset:
```python
from datasets import load_dataset
naamapadam = load_dataset('ai4bharat/naamapadam')
```
## Dataset Creation
We use the parallel corpus from the Samanantar Dataset between English and the 11 major Indian languages to create the NER dataset. We annotate the English portion of the parallel corpus with existing state-of-the-art NER model. We use word-level alignments learned from the parallel corpus to project the entity labels from English to the Indian language.
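The projection step above can be sketched as follows. This is an illustrative sketch under simplifying assumptions (one-to-one word alignments, per-token BIO tags), not the project's actual implementation:

```python
def project_labels(src_tags, alignment, tgt_len):
    """Project per-token BIO tags from the English side to the
    Indian-language side using a word alignment.

    src_tags:  BIO tag per source token, e.g. ["B-PER", "O", ...]
    alignment: dict mapping source token index -> target token index
    tgt_len:   number of target tokens
    """
    tgt_tags = ["O"] * tgt_len
    for src_idx, tag in enumerate(src_tags):
        if tag != "O" and src_idx in alignment:
            tgt_tags[alignment[src_idx]] = tag
    return tgt_tags

# Toy example: the entity-bearing tokens are reordered on the target side.
src_tags = ["B-PER", "O", "O", "B-LOC"]
alignment = {0: 0, 1: 3, 2: 2, 3: 1}
print(project_labels(src_tags, alignment, 4))  # -> ['B-PER', 'B-LOC', 'O', 'O']
```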
### Curation Rationale
Naamapadam was built from the [Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/). It was created for the task of Named Entity Recognition in Indic languages, providing new resources for languages that have been under-served in Natural Language Processing.
### Source Data
[Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/)
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
NER annotations were done following the CoNLL-2003 guidelines.
#### Who are the annotators?
The annotations for the testset have been done by volunteers who are proficient in the respective languages. We would like to thank all the volunteers:
- Anil Mhaske
- Anoop Kunchukuttan
- Archana Mhaske
- Arnav Mhaske
- Gowtham Ramesh
- Harshit Kedia
- Nitin Kedia
- Rudramurthy V
- Sangeeta Rajagopal
- Sumanth Doddapaneni
- Vindhya DS
- Yash Madhani
- Kabir Ahuja
- Shallu Rani
- Armin Virk
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large-scale Named Entity Recognition dataset for Indic languages. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
<!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" />
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/>
</a>
<br/> -->
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of the mined data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://ai4bharat.iitm.ac.in/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Naamapadam</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
If you are using the Naamapadam corpus, please cite the following article:
```
@misc{mhaske2022naamapadam,
doi = {10.48550/ARXIV.2212.10168},
url = {https://arxiv.org/abs/2212.10168},
author = {Mhaske, Arnav and Kedia, Harshit and Doddapaneni, Sumanth and Khapra, Mitesh M. and Kumar, Pratyush and Murthy, Rudra and Kunchukuttan, Anoop},
title = {Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages},
publisher = {arXiv},
year = {2022},
}
```
<!-- Contributors -->
### Contributors
- Arnav Mhaske <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Harshit Kedia <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Sumanth Doddapaneni <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Mitesh M. Khapra <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub>
- Pratyush Kumar <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub>
- Rudra Murthy <sub> ([AI4Bharat](https://ai4bharat.org), [IBM](https://www.ibm.com))</sub>
- Anoop Kunchukuttan <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub>
This work is the outcome of a volunteer effort as part of the [AI4Bharat initiative](https://ai4bharat.iitm.ac.in).
<!-- Contact -->
### Contact
- Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](mailto:anoop.kunchukuttan@gmail.com))
- Rudra Murthy V ([rmurthyv@in.ibm.com](mailto:rmurthyv@in.ibm.com)) |
changpt/ko-lima-vicuna | 2023-06-14T07:47:51.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:ko",
"license:cc-by-2.0",
"KoLima",
"region:us"
] | changpt | null | null | null | 16 | 85 | ---
license: cc-by-2.0
task_categories:
- text-generation
language:
- ko
size_categories:
- n<1K
pretty_name: KoLima(vicuna)
tags:
- KoLima
---
# Ko Lima Vicuna Dataset
This dataset was regenerated in Korean from the [lima_vicuna_format data](https://huggingface.co/datasets/64bits/lima_vicuna_format) using the GPT4 API.

When using GPT4, the prompt was written to read "Regenerate the data by referring to the original content, not just a simple translation".

Out of the 1,030 records in the original dataset, 866 were regenerated.

*The dataset was created by [HASHSCRAPER](https://www.hashscraper.com/), a company specializing in web scraping.*
---
# How to use
```python
>>> from datasets import load_dataset
>>> data = load_dataset("changpt/ko-lima-vicuna", data_files="ko_lima_vicuna.json", split="train")
>>> data
Dataset({
features: ['id', 'conversations'],
num_rows: 866
})
```
```python
>>> data[0]
{'id': '7-stackexchange',
'conversations': [{'from': 'human',
'value': '나는 ```ffmpeg```로 비디오를 GIF 파일로 변환하고 있어:\n```ffmpeg \\\n -i input.flv \\\n -ss 00:00:00.000 \\\n -pix_fmt rgb24 \\\n -r 10 \\\n -s 320x240 \\\n -t 00:00:10.000 \\\n output.gif\n```\n잘 작동하는데, 출력된 gif 파일의 품질이 너무 낮아. 변환된 gif의 품질을 어떻게 개선할 수 있을까?'},
{'from': 'gpt',
'value': '중간 이미지 파일을 피하고 싶다면, LordNeckBeard가 제공한 명령어를 ```ffmpeg```와 ImageMagick의 ```convert``` 사이에 파이프로 연결할 수 있습니다. 이렇게 하면 중간 파일이 필요하지 않습니다:\n```ffmpeg -i input.flv -vf scale=320:-1 -r 10 -f image2pipe -vcodec ppm - | convert -delay 10 -loop 0 - output.gif\n```\n```-f image2pipe```는 ffmpeg에게 비디오를 이미지로 분할하고 파이프 출력에 적합하게 만들라고 지시하며, ```-vcodec ppm```은 출력 형식을 ppm으로 지정합니다(png 형식인 경우, ```convert```가 파이프에서 모든 이미지를 읽지 않거나 ffmpeg가 모두 출력하지 않는 것 같습니다). 두 명령어의 ```-```는 각각 출력과 입력에 파이프를 사용하도록 지정합니다.\n파일을 저장하지 않고 결과를 최적화하려면, ```convert```의 출력을 두 번째 ```convert``` 명령어로 파이프할 수 있습니다:\n```ffmpeg -i input.flv -vf scale=320:-1 -r 10 -f image2pipe -vcodec ppm - | convert -delay 10 -loop 0 - gif:- | convert -layers Optimize - output.gif\n```\n```gif:-```는 ```convert```에게 출력을 gif 형식의 데이터로 파이프하도록 지시하고, ```-layers Optimize```는 두 번째 ```convert```에게 ```optimize-frame``` 및 ```optimize-transparancy``` 방법을 수행하도록 지시합니다(ImageMagick 애니메이션 최적화 소개 참조). ```-layers Optimize```의 출력이 항상 더 작은 파일 크기를 제공하지는 않으므로, 먼저 최적화 없이 gif로 변환해 보는 것이 좋습니다.\n이 과정에서 모든 것이 메모리에 있으므로 이미지가 매우 큰 경우 충분한 메모리가 필요할 수 있습니다.'}]}
```
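The `conversations` field follows the Vicuna/ShareGPT turn format shown above. A minimal sketch for flattening one record into a single prompt string; the `USER`/`ASSISTANT` role names are an arbitrary choice for illustration, not part of the dataset:

```python
def to_prompt(conversations):
    """Flatten Vicuna-style turns into one prompt string."""
    role_map = {"human": "USER", "gpt": "ASSISTANT"}
    return "\n".join(
        f"{role_map[turn['from']]}: {turn['value']}" for turn in conversations
    )

sample = [
    {"from": "human", "value": "ffmpeg로 변환한 GIF 품질을 개선하려면?"},
    {"from": "gpt", "value": "palettegen/paletteuse 필터를 사용해 보세요."},
]
print(to_prompt(sample))
```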
---
# License
[CC BY 2.0 KR](https://creativecommons.org/licenses/by/2.0/kr/)
[Open AI](https://openai.com/policies/terms-of-use) |
sharmaarushi17/HPCPerfOpt-MCQA | 2023-09-22T05:30:21.000Z | [
"license:cc",
"region:us"
] | sharmaarushi17 | null | null | null | 0 | 85 | ---
license: cc
pretty_name: HPCPerfOpt (HPC Performance Optimization Benchmark)
---
This dataset contains multiple-choice question-answer pairs.

There are three test files, separated by how they were created:
- `test1.csv`: data collected manually from tutorials, etc.
- `test2.csv`: scraped from the documentation of the Codee profiling tool
- `test3.csv`: ChatGPT-generated MCQs (the format still needs updating and the answer order needs randomizing) |
izumi-lab/wikipedia-ja-20230720 | 2023-07-29T03:05:36.000Z | [
"language:ja",
"license:cc-by-sa-3.0",
"region:us"
] | izumi-lab | null | null | null | 2 | 85 | ---
dataset_info:
features:
- name: curid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3653518687
num_examples: 1362415
download_size: 2130533065
dataset_size: 3653518687
license: cc-by-sa-3.0
language:
- ja
---
# Dataset Card for "wikipedia-ja-20230720"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vlsp-2023-vllm/grade_12_exams | 2023-09-30T08:28:29.000Z | [
"region:us"
] | vlsp-2023-vllm | null | null | null | 0 | 85 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: metadata
struct:
- name: grade
dtype: int64
- name: language
dtype: string
- name: subject
dtype: string
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 921887
num_examples: 1955
- name: validation
num_bytes: 224168
num_examples: 488
download_size: 461705
dataset_size: 1146055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "grade_12_exams"
Reference: https://huggingface.co/datasets/exams |
serbog/all_job_listings_cleaned | 2023-09-14T23:53:19.000Z | [
"region:us"
] | serbog | null | null | null | 0 | 85 | ---
dataset_info:
features:
- name: job_id
dtype: string
- name: description
dtype: string
- name: title
dtype: string
- name: creationdate
dtype: int64
- name: name
dtype: string
- name: location_codes
sequence: string
splits:
- name: train
num_bytes: 2991442359
num_examples: 1687761
download_size: 1434224117
dataset_size: 2991442359
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "all_job_listings_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tim9510019/llama2_QA_Economics_230915 | 2023-10-10T01:30:51.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:mit",
"finance",
"region:us"
] | tim9510019 | null | null | null | 2 | 85 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: input
dtype: string
- name: Answer
dtype: string
- name: Source
dtype: int64
- name: Date
dtype: timestamp[ns]
- name: Type
dtype: int64
- name: Prompt
dtype: int64
- name: QuestionTokenNum
dtype: int64
- name: inputTokenNum
dtype: int64
- name: AnswerTokenNum
dtype: int64
splits:
- name: train
num_bytes: 1269385
num_examples: 288
download_size: 405947
dataset_size: 1269385
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- finance
---
# Dataset Card for "llama2_QA_Economics_230915"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jimregan/clarinpl_sejmsenat | 2023-01-22T13:37:24.000Z | [
"task_categories:other",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:other",
"region:us"
] | jimregan | A collection of 97 hours of parliamentary speeches published on the ClarinPL website
Note that in order to limit the required storage for preparing this dataset, the audio
is stored in the .wav format and is not converted to a float32 array. To convert the audio
file to a float32 array, please make use of the `.map()` function as follows:
```python
import soundfile as sf
def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @article{marasek2014system,
title={System for automatic transcription of sessions of the {P}olish {S}enate},
author={Marasek, Krzysztof and Kor{\v{z}}inek, Danijel and Brocki, {\L}ukasz},
journal={Archives of Acoustics},
volume={39},
number={4},
pages={501--509},
year={2014}
} | null | 1 | 84 | ---
annotations_creators:
- expert-generated
language:
- pl
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for ClarinPL Sejm/Senat Speech Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CLARIN-PL mowa](https://mowa.clarin-pl.eu/)
- **Repository:** [Needs More Information]
- **Paper:** [System for Automatic Transcription of Sessions of the Polish Senate](https://acoustics.ippt.pan.pl/index.php/aa/article/view/327/pdf_32)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A collection of 97 hours of parliamentary speeches published on the ClarinPL website.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Polish.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file`, and its transcription, called `text`.
An example from the dataset is:
```
{'file': '/root/.cache/huggingface/datasets/downloads/extracted/4143b1d75559b10028c1c7e8800c9ccc05934ca5a8ea15f8f9a92770576a1ee3/SejmSenat/audio/AdamAbramowicz-20130410/file000.wav',
'id': 'AdamAbramowicz-20130410-file000',
'speaker_id': 'AdamAbramowicz',
'text': 'panie marszałku wysoka izbo panie ministrze próbuje się przedstawiać polskę jako zieloną wyspę kraj który się szybko rozwija tymczasem rzeczywistość jest zupełnie inna a widać ją także dzisiaj przed polskim parlamentem próbuje się rząd próbuje zagonić polaków do pracy aż do śmierci przedłużać wiek emerytalny czyliczyli sytuacja gospodarcza polski w tym wypadku jest przedstawiana już zupełnie inaczej pakiet klimatyczny i protokół z kioto jak się zgadzają fachowcy od gospodarki jest szkodliwy dla krajów które są na dorobku a polska właśnie jest takim krajem'}
```
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- text: the transcription of the audio file.
- speaker_id: The ID of the speaker of the audio.
### Data Splits
| | Train | Test |
| ----- | ----- | ---- |
| dataset | 6622 | 130 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
### Contributions
[Needs More Information] |
asapp/slue | 2022-09-26T23:08:10.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:sentiment-analysis",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"license:cc-by-4.0",
"arxiv:2111.10367",
"region:us"
] | asapp | Spoken Language Understanding Evaluation (SLUE) benchmark. There are two subsets: (i) SLUE-VoxPopuli which has ASR and NER tasks and (ii) SLUE-VoxCeleb which has ASR and SA tasks. | @inproceedings{shon2022slue,
title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech},
author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J},
booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7927--7931},
year={2022},
organization={IEEE}
} | null | 3 | 84 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc0-1.0
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: slue
pretty_name: SLUE (Spoken Language Understanding Evaluation benchmark)
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
- audio-classification
- text-classification
- token-classification
task_ids:
- sentiment-analysis
- named-entity-recognition
configs:
- voxpopuli
- voxceleb
---
# Dataset Card for SLUE
## Table of Contents
- [Dataset Card for SLUE](#dataset-card-for-slue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Automatic Speech Recognition (ASR)](#automatic-speech-recognition-asr)
- [Named Entity Recognition (NER)](#named-entity-recognition-ner)
- [Sentiment Analysis (SA)](#sentiment-analysis-sa)
- [How-to-submit for your test set evaluation](#how-to-submit-for-your-test-set-evaluation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [voxpopuli](#voxpopuli)
- [voxceleb](#voxceleb)
- [Data Fields](#data-fields)
- [voxpopuli](#voxpopuli-1)
- [voxceleb](#voxceleb-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [SLUE-VoxPopuli Dataset](#slue-voxpopuli-dataset)
- [SLUE-VoxCeleb Dataset](#slue-voxceleb-dataset)
- [Original License of OXFORD VGG VoxCeleb Dataset](#original-license-of-oxford-vgg-voxceleb-dataset)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://asappresearch.github.io/slue-toolkit](https://asappresearch.github.io/slue-toolkit)
- **Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/)
- **Paper:** [https://arxiv.org/pdf/2111.10367.pdf](https://arxiv.org/pdf/2111.10367.pdf)
- **Leaderboard:** [https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html](https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html)
- **Size of downloaded dataset files:** 1.95 GB
- **Size of the generated dataset:** 9.59 MB
- **Total amount of disk used:** 1.95 GB
### Dataset Summary
We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to
- Track research progress on multiple SLU tasks
- Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks
- Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use.
For this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to [Toolkit](https://github.com/asappresearch/slue-toolkit) and [Paper](https://arxiv.org/pdf/2111.10367.pdf) for more details.
### Supported Tasks and Leaderboards
#### Automatic Speech Recognition (ASR)
Although this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER).
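WER is the word-level edit distance between the hypothesis and the reference, normalized by reference length. A minimal sketch (an illustration only; official numbers come from the slue-toolkit evaluation scripts):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the commission will act", "the commission acts"))  # -> 0.5
```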
#### Named Entity Recognition (NER)
Named entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1.
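The unordered-pair F1 can be sketched as follows (an illustration of the metric's shape, not the official slue-toolkit scorer):

```python
from collections import Counter

def micro_f1(pred_pairs, gold_pairs):
    """Micro-averaged F1 over unordered (phrase, tag) pairs.

    pred_pairs / gold_pairs: one list of (phrase, tag) tuples per sentence.
    """
    tp = fp = fn = 0
    for pred, gold in zip(pred_pairs, gold_pairs):
        # Counter intersection counts each pair at most min(pred, gold) times.
        overlap = sum((Counter(pred) & Counter(gold)).values())
        tp += overlap
        fp += len(pred) - overlap
        fn += len(gold) - overlap
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

pred = [[("europe", "PLACE"), ("2012", "WHEN")]]
gold = [[("europe", "PLACE"), ("the commission", "ORG")]]
print(micro_f1(pred, gold))  # -> 0.5
```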
#### Sentiment Analysis (SA)
Sentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores.
#### How-to-submit for your test set evaluation
See here https://asappresearch.github.io/slue-toolkit/how-to-submit.html
### Languages
The language data in SLUE is in English.
## Dataset Structure
### Data Instances
#### voxpopuli
- **Size of downloaded dataset files:** 398.45 MB
- **Size of the generated dataset:** 5.81 MB
- **Total amount of disk used:** 404.26 MB
An example of 'train' looks as follows.
```
{'id': '20131007-0900-PLENARY-19-en_20131007-21:26:04_3',
'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/e35757b0971ac7ff5e2fcdc301bba0364857044be55481656e2ade6f7e1fd372/slue-voxpopuli/fine-tune/20131007-0900-PLENARY-19-en_20131007-21:26:04_3.ogg',
'array': array([ 0.00132601, 0.00058881, -0.00052187, ..., 0.06857217,
0.07835515, 0.07845446], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'None',
'normalized_text': 'two thousand and twelve for instance the new brussels i regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in europe even if the employer is domiciled outside europe. the commission will',
'raw_text': '2012. For instance, the new Brussels I Regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in Europe, even if the employer is domiciled outside Europe. The Commission will',
'raw_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'],
'start': [227, 177, 28, 0],
'length': [6, 6, 21, 4]},
'normalized_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'],
'start': [243, 194, 45, 0],
'length': [6, 6, 21, 23]},
'raw_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'],
'start': [227, 177, 28, 0],
'length': [6, 6, 21, 4]},
'normalized_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'],
'start': [243, 194, 45, 0],
'length': [6, 6, 21, 23]}}
```
#### voxceleb
- **Size of downloaded dataset files:** 1.55 GB
- **Size of the generated dataset:** 3.78 MB
- **Total amount of disk used:** 1.55 GB
An example of 'train' looks as follows.
```
{'id': 'id10059_229vKIGbxrI_00004',
'audio': {'path': '/Users/felixwu/.cache/huggingface/datasets/downloads/extracted/400facb6d2f2496ebcd58a5ffe5fbf2798f363d1b719b888d28a29b872751626/slue-voxceleb/fine-tune_raw/id10059_229vKIGbxrI_00004.flac',
'array': array([-0.00442505, -0.00204468, 0.00628662, ..., 0.00158691,
0.00100708, 0.00033569], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'id10059',
'normalized_text': 'of god what is a creator the almighty that uh',
'sentiment': 'Neutral',
'start_second': 0.45,
'end_second': 4.52}
```
### Data Fields
#### voxpopuli
- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `raw_text`: a `string` feature that contains the raw transcription of the audio.
- `normalized_text`: a `string` feature that contains the normalized transcription of the audio which is **used in the standard evaluation**.
- `raw_ner`: the NER annotation of the `raw_text` using the same 18 NER classes as OntoNotes.
- `normalized_ner`: the NER annotation of the `normalized_text` using the same 18 NER classes as OntoNotes.
- `raw_combined_ner`: the NER annotation of the `raw_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`).
- `normalized_combined_ner`: the NER annotation of the `normalized_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`) which is **used in the standard evaluation**.
Each NER annotation is a dictionary containing three lists: `type`, `start`, and `length`. `type` is a list of the NER tag types. `start` is a list of the start character position of each named entity in the corresponding text. `length` is a list of the number of characters of each named entity.
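Given those parallel lists, entity surface forms can be recovered by slicing the corresponding text, for example:

```python
def extract_entities(text, ner):
    """Slice entity strings out of `text` using the parallel
    type/start/length lists of a SLUE NER annotation."""
    return [
        (tag, text[start:start + length])
        for tag, start, length in zip(ner["type"], ner["start"], ner["length"])
    ]

text = "the new brussels i regulation provides rights in europe"
ner = {"type": ["LAW", "PLACE"], "start": [8, 49], "length": [21, 6]}
print(extract_entities(text, ner))
# -> [('LAW', 'brussels i regulation'), ('PLACE', 'europe')]
```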
#### voxceleb
- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. Please use `start_second` and `end_second` to crop the transcribed segment. For example, `dataset[0]["audio"]["array"][int(dataset[0]["start_second"] * dataset[0]["audio"]["sampling_rate"]):int(dataset[0]["end_second"] * dataset[0]["audio"]["sampling_rate"])]`. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `normalized_text`: a `string` feature that contains the transcription of the audio segment.
- `sentiment`: a `string` feature which can be `Negative`, `Neutral`, or `Positive`.
- `start_second`: a `float` feature that specifies the start second of the audio segment.
- `end_second`: a `float` feature that specifies the end second of the audio segment.
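A minimal sketch of the cropping described above, using a dummy 10-second silent waveform in place of a real decoded clip (rounding to the nearest sample is a choice made here, not part of the dataset spec):

```python
# Sketch: crop the transcribed VoxCeleb segment out of the full audio array,
# assuming the field layout described above. The waveform is a stand-in.
example = {
    "audio": {"array": [0.0] * (16000 * 10), "sampling_rate": 16000},
    "start_second": 0.45,
    "end_second": 4.52,
}

sr = example["audio"]["sampling_rate"]
start = round(example["start_second"] * sr)  # second -> sample index
end = round(example["end_second"] * sr)
segment = example["audio"]["array"][start:end]
print(len(segment))  # 65120 samples, i.e. 4.07 s at 16 kHz
```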
### Data Splits
| |train|validation|test|
|---------|----:|---------:|---:|
|voxpopuli| 5000| 1753|1842|
|voxceleb | 5777| 1454|3553|
Here we use the standard split names of Hugging Face's `datasets`, so the `train` and `validation` splits are the original `fine-tune` and `dev` splits of the SLUE datasets, respectively.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
#### SLUE-VoxPopuli Dataset
SLUE-VoxPopuli dataset contains a subset of the VoxPopuli dataset, and the copyright of this subset remains the same as the original license, CC0. See also the European Parliament's legal notice (https://www.europarl.europa.eu/legal-notice/en/).
Additionally, we provide named entity annotations (the normalized_ner and raw_ner columns in the .tsv files), which are covered under the same CC0 license.
#### SLUE-VoxCeleb Dataset
SLUE-VoxCeleb Dataset contains a subset of the OXFORD VoxCeleb dataset, and the copyright of this subset remains under the same Creative Commons Attribution 4.0 International license as below. Additionally, we provide transcriptions, sentiment annotations, and timestamps (start, end) that follow the same license as the OXFORD VoxCeleb dataset.
##### Original License of OXFORD VGG VoxCeleb Dataset
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
VoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.
The speakers span a wide range of different ethnicities, accents, professions and ages.
We provide Youtube URLs, associated face detections, and timestamps, as
well as cropped audio segments and cropped face videos from the
dataset. The copyright of both the original and cropped versions
of the videos remains with the original owners.
The data is covered under a Creative Commons
Attribution 4.0 International license (Please read the
license terms here. https://creativecommons.org/licenses/by/4.0/).
Downloading this dataset implies agreement to follow the same
conditions for any modification and/or
re-distribution of the dataset in any form.
Additionally any entity using this dataset agrees to the following conditions:
THIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Please cite [1,2] below if you make use of the dataset.
[1] J. S. Chung, A. Nagrani, A. Zisserman
VoxCeleb2: Deep Speaker Recognition
INTERSPEECH, 2018.
[2] A. Nagrani, J. S. Chung, A. Zisserman
VoxCeleb: a large-scale speaker identification dataset
INTERSPEECH, 2017
### Citation Information
```
@inproceedings{shon2022slue,
title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech},
author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J},
booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7927--7931},
year={2022},
organization={IEEE}
}
```
### Contributions
Thanks to [@fwu-asapp](https://github.com/fwu-asapp) for adding this dataset. |
bond005/sberdevices_golos_100h_farfield | 2022-10-27T04:23:04.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended",
"language:ru",
"license:other",
"arxiv:2106.10161",
"region:us"
] | bond005 | null | null | null | 0 | 84 | ---
pretty_name: Golos
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- ru
license:
- other
multilinguality:
- monolingual
paperswithcode_id: golos
size_categories:
- 10K<n<100k
source_datasets:
- extended
task_categories:
- automatic-speech-recognition
- audio-classification
---
# Dataset Card for sberdevices_golos_100h_farfield
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Golos ASR corpus](https://www.openslr.org/114)
- **Repository:** [Golos dataset](https://github.com/sberdevices/golos)
- **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Nikolay Karpov](mailto:karpnv@gmail.com)
### Dataset Summary
Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by the SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from a crowd-sourcing platform and has been manually annotated.
The authors divide the full dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, the authors also identified shorter training subsets: 100 hours, 10 hours, 1 hour, and 10 minutes.
This dataset is a simpler version of the above-mentioned Golos:
- it includes the farfield domain only (without any sound from the crowd domain);
- validation split is built on the 10-hour training subset;
- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;
- test split is a full original test split.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio` and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided.
```
{'audio': {'path': None,
'array': array([ 1.22070312e-04, 1.22070312e-04, 9.15527344e-05, ...,
6.10351562e-05, 6.10351562e-05, 3.05175781e-05]), dtype=float64),
'sampling_rate': 16000},
'transcription': 'джой источники истории турции'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
### Data Splits
This dataset is a simpler version of the original Golos:
- it includes the farfield domain only (without any sound from the crowd domain);
- validation split is built on the 10-hour training subset;
- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;
- test split is a full original test split.
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 9570 | 933 | 1916 |
| hours | 10.3h | 1.0h | 1.4h |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated on the crowd-sourcing platform.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
### Licensing Information
[Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf)
### Citation Information
```
@misc{karpov2021golos,
author = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor},
title = {Golos: Russian Dataset for Speech Research},
publisher = {arXiv},
year = {2021},
url = {https://arxiv.org/abs/2106.10161}
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset.
|
fahamu/ioi | 2022-11-28T03:58:50.000Z | [
"license:mit",
"doi:10.57967/hf/0142",
"region:us"
] | fahamu | null | null | null | 2 | 84 | ---
license: mit
---
# Dataset Release: Indirect Object Identification
`mecha_ioi` is a pair of datasets tailored for the Indirect Object Identification task, where sentences are generated from the following set of templates:
- BABA
```
baba_templates = [
"Then, {B} and {A} went to the {PLACE}. {B} gave a {OBJECT} to {A}",
"Then, {B} and {A} had a lot of fun at the {PLACE}. {B} gave a {OBJECT} to {A}",
"Then, {B} and {A} were working at the {PLACE}. {B} decided to give a {OBJECT} to {A}",
"Then, {B} and {A} were thinking about going to the {PLACE}. {B} wanted to give a {OBJECT} to {A}",
"Then, {B} and {A} had a long argument, and afterwards {B} said to {A}",
"After {B} and {A} went to the {PLACE}, {B} gave a {OBJECT} to {A}",
"When {B} and {A} got a {OBJECT} at the {PLACE}, {B} decided to give it to {A}",
"When {B} and {A} got a {OBJECT} at the {PLACE}, {B} decided to give the {OBJECT} to {A}",
"While {B} and {A} were working at the {PLACE}, {B} gave a {OBJECT} to {A}",
"While {B} and {A} were commuting to the {PLACE}, {B} gave a {OBJECT} to {A}",
"After the lunch, {B} and {A} went to the {PLACE}. {B} gave a {OBJECT} to {A}",
"Afterwards, {B} and {A} went to the {PLACE}. {B} gave a {OBJECT} to {A}",
"Then, {B} and {A} had a long argument. Afterwards {B} said to {A}",
"The {PLACE} {B} and {A} went to had a {OBJECT}. {B} gave it to {A}",
"Friends {B} and {A} found a {OBJECT} at the {PLACE}. {B} gave it to {A}",
]
```
- ABBA
```
abba_templates = [
"Then, {A} and {B} went to the {PLACE}. {B} gave a {OBJECT} to {A}",
"Then, {A} and {B} had a lot of fun at the {PLACE}. {B} gave a {OBJECT} to {A}",
"Then, {A} and {B} were working at the {PLACE}. {B} decided to give a {OBJECT} to {A}",
"Then, {A} and {B} were thinking about going to the {PLACE}. {B} wanted to give a {OBJECT} to {A}",
"Then, {A} and {B} had a long argument, and afterwards {B} said to {A}",
"After {A} and {B} went to the {PLACE}, {B} gave a {OBJECT} to {A}",
"When {A} and {B} got a {OBJECT} at the {PLACE}, {B} decided to give it to {A}",
"When {A} and {B} got a {OBJECT} at the {PLACE}, {B} decided to give the {OBJECT} to {A}",
"While {A} and {B} were working at the {PLACE}, {B} gave a {OBJECT} to {A}",
"While {A} and {B} were commuting to the {PLACE}, {B} gave a {OBJECT} to {A}",
"After the lunch, {A} and {B} went to the {PLACE}. {B} gave a {OBJECT} to {A}",
"Afterwards, {A} and {B} went to the {PLACE}. {B} gave a {OBJECT} to {A}",
"Then, {A} and {B} had a long argument. Afterwards {B} said to {A}",
"The {PLACE} {A} and {B} went to had a {OBJECT}. {B} gave it to {A}",
"Friends {A} and {B} found a {OBJECT} at the {PLACE}. {B} gave it to {A}",
]
```
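For illustration, a template can be filled in with Python's `str.format`; the names, place, and object below are arbitrary stand-ins, not values from the released data:

```python
# Sketch: instantiate one of the BABA templates above with concrete slot values.
template = ("Then, {B} and {A} went to the {PLACE}. "
            "{B} gave a {OBJECT} to {A}")
prompt = template.format(A="Mary", B="John", PLACE="store", OBJECT="drink")
print(prompt)
# Then, John and Mary went to the store. John gave a drink to Mary
```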
The purpose of this dataset is to facilitate interpretability research, inspired by the paper
_Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small_ from Redwood Research. We are not affiliated with Redwood Research,
and release this dataset to contribute to the collective research effort to understand how Transformer language models perform this task.
### BibTex
```
@misc {fahamu_2022,
author = { {Fahamu} },
title = { ioi (Revision 223da8b) },
year = 2022,
url = { https://huggingface.co/datasets/fahamu/ioi },
doi = { 10.57967/hf/0142 },
publisher = { Hugging Face }
}
``` |
cjvt/sloleks | 2022-12-21T14:42:09.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | cjvt | Sloleks is a reference morphological lexicon of Slovene that was developed to be used in various NLP applications and language manuals. \
It contains Slovene lemmas, their inflected or derivative word forms and the corresponding grammatical description. In addition to the approx. 100,000 entries already available in Sloleks 2.0, Sloleks 3.0 contains an additional cca. 265,000 newly generated entries from the most frequent lemmas in Gigafida 2.0 not yet included in previous versions of Sloleks. For verbs, adjectives, adverbs, and common nouns, the lemmas were checked manually by three annotators and \
included in Sloleks only if confirmed as legitimate by at least one annotator. No manual checking was performed on proper nouns. | @misc{sloleks3,
title = {Morphological lexicon Sloleks 3.0},
author = {{\v C}ibej, Jaka and Gantar, Kaja and Dobrovoljc, Kaja and Krek, Simon and Holozan, Peter and Erjavec, Toma{\v z} and Romih, Miro and Arhar Holdt, {\v S}pela and Krsnik, Luka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1745},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2022}
} | null | 0 | 84 | ---
license: cc-by-sa-4.0
---
# Dataset Card for Sloleks 3
**Important**: this is a minimal script for processing Sloleks 3. Most notably, some word form properties (accentuation, pronunciation) and frequencies are not exposed here.
Please see the [CLARIN repository](https://www.clarin.si/repository/xmlui/handle/11356/1745) for full details on what the dataset contains, and open an issue or a pull request if you require some other information from the raw data.
### Dataset Summary
Sloleks is a reference morphological lexicon of Slovene that was developed to be used in various NLP applications and language manuals.
It contains Slovene lemmas, their inflected or derivative word forms and the corresponding grammatical description.
In addition to the approx. 100,000 entries already available in [Sloleks 2.0](http://hdl.handle.net/11356/1230), Sloleks 3.0 contains an additional
cca. 265,000 newly generated entries from the most frequent lemmas in [Gigafida 2.0](http://hdl.handle.net/11356/1320) not yet included in previous versions of Sloleks.
For verbs, adjectives, adverbs, and common nouns, the lemmas were checked manually by three annotators and included in Sloleks only if confirmed as legitimate by at
least one annotator. No manual checking was performed on proper nouns. Lemmatization rules, part-of-speech categorization and the set of feature-value pairs follow the
[MULTEXT-East morphosyntactic specifications for Slovenian](https://nl.ijs.si/ME/V6/msd/html/msd-sl.html).
### Supported Tasks and Leaderboards
Other (the data is a knowledge base - lexicon).
### Languages
Slovenian.
## Dataset Structure
### Data Instances
Entry for the verb `absorbirati` (English: *to absorb*):
```
{
'headword_lemma': 'absorbirati',
'pos': 'verb',
'lex_unit': {'id': 'LE_a293f9ab871299f116dff2cc1421367a', 'form': 'absorbirati', 'key': 'G_absorbirati', 'type': 'single'},
'word_forms':
[
{'forms': ['absorbirati'], 'msd': 'Ggvn'},
{'forms': ['absorbirat'], 'msd': 'Ggvm'},
{'forms': ['absorbiral'], 'msd': 'Ggvd-em'},
{'forms': ['absorbirala'], 'msd': 'Ggvd-dm'},
{'forms': ['absorbirali'], 'msd': 'Ggvd-mm'},
{'forms': ['absorbirala'], 'msd': 'Ggvd-ez'},
{'forms': ['absorbirali'], 'msd': 'Ggvd-dz'},
{'forms': ['absorbirale'], 'msd': 'Ggvd-mz'},
{'forms': ['absorbiralo'], 'msd': 'Ggvd-es'},
{'forms': ['absorbirali'], 'msd': 'Ggvd-ds'},
{'forms': ['absorbirala'], 'msd': 'Ggvd-ms'},
{'forms': ['absorbiram'], 'msd': 'Ggvspe'},
{'forms': ['absorbiraš'], 'msd': 'Ggvsde'},
{'forms': ['absorbira'], 'msd': 'Ggvste'},
{'forms': ['absorbirava'], 'msd': 'Ggvspd'},
{'forms': ['absorbirata'], 'msd': 'Ggvsdd'},
{'forms': ['absorbirata'], 'msd': 'Ggvstd'},
{'forms': ['absorbiramo'], 'msd': 'Ggvspm'},
{'forms': ['absorbirate'], 'msd': 'Ggvsdm'},
{'forms': ['absorbirajo'], 'msd': 'Ggvstm'},
{'forms': ['absorbirajva'], 'msd': 'Ggvvpd'},
{'forms': ['absorbirajmo'], 'msd': 'Ggvvpm'},
{'forms': ['absorbiraj'], 'msd': 'Ggvvde'},
{'forms': ['absorbirajta'], 'msd': 'Ggvvdd'},
{'forms': ['absorbirajte'], 'msd': 'Ggvvdm'}
],
'is_manually_checked': True
}
```
### Data Fields
- `headword_lemma`: lemma of the headword;
- `pos`: coarse-grained part-of-speech tag (one of `{"noun", "verb", "adjective", "adverb", "pronoun", "numeral", "preposition", "conjunction", "particle", "interjection", "abbreviation", "residual"}`);
- `lex_unit`: properties of the lexical unit corresponding to the headword (`id`, `form`, `key` and `type`);
- `word_forms`: forms of the headword, each with its own list of possible forms and the morphosyntactic description of the form;
- `is_manually_checked`: whether the headword was manually validated or not.
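As a sketch, the `word_forms` list can be indexed by MSD tag for quick lookup; the entry below is an abbreviated copy of the sample entry above (only two forms kept):

```python
# Sketch: build an MSD-tag -> forms lookup from a Sloleks entry.
entry = {
    "headword_lemma": "absorbirati",
    "word_forms": [
        {"forms": ["absorbirati"], "msd": "Ggvn"},
        {"forms": ["absorbiram"], "msd": "Ggvspe"},
    ],
}

# If an MSD tag occurred twice, the last occurrence would win here.
msd_to_forms = {wf["msd"]: wf["forms"] for wf in entry["word_forms"]}
print(msd_to_forms["Ggvspe"])  # ['absorbiram']
```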
## Additional Information
### Dataset Curators
Jaka Čibej; et al. (please see http://hdl.handle.net/11356/1745 for the full list).
### Licensing Information
CC BY-SA 4.0.
### Citation Information
```
@misc{sloleks3,
title = {Morphological lexicon Sloleks 3.0},
author = {{\v C}ibej, Jaka and Gantar, Kaja and Dobrovoljc, Kaja and Krek, Simon and Holozan, Peter and Erjavec, Toma{\v z} and Romih, Miro and Arhar Holdt, {\v S}pela and Krsnik, Luka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1745},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
tasksource/babi_nli | 2023-06-05T09:05:59.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:bsd",
"logical reasoning",
"nli",
"natural-language-inference",
"reasoning",
"logic",
"region:us"
] | tasksource | bAbI tasks recast as natural language inference. | null | null | 1 | 84 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license: bsd
multilinguality:
- monolingual
pretty_name: babi_nli
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
tags:
- logical reasoning
- nli
- natural-language-inference
- reasoning
- logic
---
# bAbi_nli
bAbI tasks recast as natural language inference.
https://github.com/facebookarchive/bAbI-tasks
tasksource recasting code:
https://colab.research.google.com/drive/1J_RqDSw9iPxJSBvCJu-VRbjXnrEjKVvr?usp=sharing
```bibtex
@article{weston2015towards,
title={Towards ai-complete question answering: A set of prerequisite toy tasks},
author={Weston, Jason and Bordes, Antoine and Chopra, Sumit and Rush, Alexander M and Van Merri{\"e}nboer, Bart and Joulin, Armand and Mikolov, Tomas},
journal={arXiv preprint arXiv:1502.05698},
year={2015}
}
``` |
Aeala/ShareGPT_Vicuna_unfiltered | 2023-06-01T07:03:50.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | Aeala | null | null | null | 10 | 84 | ---
license: apache-2.0
language:
- en
---
## Dataset Card
This is a reupload of [this dataset](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) that was further cleaned by gozfarb. |
C-MTEB/STSB | 2023-07-28T13:40:47.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 84 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: int32
splits:
- name: train
num_bytes: 639550
num_examples: 5231
- name: validation
num_bytes: 197381
num_examples: 1458
- name: test
num_bytes: 158230
num_examples: 1361
download_size: 682182
dataset_size: 995161
---
# Dataset Card for "STSB"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lavita/ChatDoctor-HealthCareMagic-100k | 2023-09-09T07:40:38.000Z | [
"region:us"
] | lavita | null | null | null | 0 | 84 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 126454896
num_examples: 112165
download_size: 70518148
dataset_size: 126454896
---
# Dataset Card for "ChatDoctor-HealthCareMagic-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LabHC/moji | 2023-09-28T09:12:22.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | LabHC | null | null | null | 0 | 84 | ---
task_categories:
- text-classification
language:
- en
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: sa
dtype: int64
splits:
- name: train
num_bytes: 128596235
num_examples: 1613790
- name: test
num_bytes: 35731728
num_examples: 448276
- name: dev
num_bytes: 14325121
num_examples: 179310
download_size: 93470968
dataset_size: 178653084
---
The Moji dataset (Blodgett et al., 2016) (http://slanglab.cs.umass.edu/TwitterAAE/) contains tweets used for sentiment analysis (either positive or negative sentiment), with additional information on the type of English used in the tweets which is a sensitive attribute considered in fairness-aware approaches (African-American English (AAE) or Standard-American English (SAE)).
The type of language is determined by a supervised model. Only the examples where the sensitive attribute is predicted with a confidence above a given threshold are kept.
Based on this principle, we make available two versions of the Moji dataset, with thresholds of 80% and 90% respectively. The distributions of the two versions are presented below.
### Dataset with 80% threshold
| | Positive sentiment | Negative Sentiment | Total |
|---|---|---|---|
AAE | 73 013 | 44 023 | 117 036 |
SAE | 1 471 427 | 652 913 | 2 124 340 |
Total | 1 544 440 | 696 936 | 2 241 376 |
To load this dataset, use the following code:
```python
dataset = load_dataset("LabHC/moji", revision='moji_conf_08')
```
By default, the version with the 80% threshold is loaded:
```python
dataset = load_dataset("LabHC/moji")
```
### Dataset with 90% threshold
| | Positive sentiment | Negative Sentiment | Total |
|---|---|---|---|
AAE | 30 827 | 18 409 | 49 236 |
SAE | 793 867 | 351 600 | 1 145 467 |
Total | 824 694 | 370 009 | 1 194 703 |
To load this dataset, use the following code:
```python
dataset = load_dataset("LabHC/moji", revision='moji_conf_09')
```
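A small sketch of splitting rows by the `sa` column; the rows below are hypothetical stand-ins for loaded examples, and the mapping of `sa` values to AAE vs. SAE is not specified on this card:

```python
# Sketch: group examples by the sensitive attribute column `sa`.
rows = [
    {"text": "tweet a", "label": 1, "sa": 0},
    {"text": "tweet b", "label": 0, "sa": 1},
    {"text": "tweet c", "label": 1, "sa": 0},
]

by_group = {}
for row in rows:
    by_group.setdefault(row["sa"], []).append(row)
counts = {sa: len(group) for sa, group in by_group.items()}
print(counts)  # {0: 2, 1: 1}
```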
----
[Demographic Dialectal Variation in Social Media: A Case Study of African-American English](https://aclanthology.org/D16-1120) (Blodgett et al., EMNLP 2016) |
phongmt184172/python_data_27k | 2023-10-05T02:30:02.000Z | [
"region:us"
] | phongmt184172 | null | null | null | 0 | 84 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 39244063.17425801
num_examples: 19056
- name: test
num_bytes: 8410618.912870996
num_examples: 4084
- name: val
num_bytes: 8410618.912870996
num_examples: 4084
download_size: 23588770
dataset_size: 56065301.0
---
# Dataset Card for "python_data_27k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmrau/trec_dl20 | 2023-10-09T08:28:56.000Z | [
"license:unknown",
"region:us"
] | dmrau | null | null | null | 0 | 84 | ---
license: unknown
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 2816
num_examples: 54
- name: corpus
num_bytes: 4099348
num_examples: 10446
download_size: 2356188
dataset_size: 4102164
---
|
deepset/germandpr | 2023-04-06T13:59:37.000Z | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:extractive-qa",
"task_ids:closed-domain-qa",
"multilinguality:monolingual",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"arxiv:2104.12741",
"region:us"
] | deepset | We take GermanQuAD as a starting point and add hard negatives from a dump of the full German Wikipedia following the approach of the DPR authors (Karpukhin et al., 2020). The format of the dataset also resembles the one of DPR. GermanDPR comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set. For each pair, there is one positive context and three hard negative contexts. | @misc{möller2021germanquad,
title={GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval},
author={Timo Möller and Julian Risch and Malte Pietsch},
year={2021},
eprint={2104.12741},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 7 | 83 | ---
language:
- de
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
task_ids:
- extractive-qa
- closed-domain-qa
thumbnail: >-
https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
license: cc-by-4.0
---

# Dataset Card for germandpr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://deepset.ai/germanquad
- **Repository:** https://github.com/deepset-ai/haystack
- **Paper:** https://arxiv.org/abs/2104.12741
### Dataset Summary
We take GermanQuAD as a starting point and add hard negatives from a dump of the full German Wikipedia following the approach of the DPR authors (Karpukhin et al., 2020). The format of the dataset also resembles the one of DPR. GermanDPR comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set. For each pair, there is one positive context and three hard negative contexts.
### Supported Tasks and Leaderboards
- `open-domain-qa`, `text-retrieval`: This dataset is intended to be used for `open-domain-qa` and text retrieval tasks.
### Languages
The sentences in the dataset are in German (de).
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
"question": "Wie viele christlichen Menschen in Deutschland glauben an einen Gott?",
"answers": [
"75 % der befragten Katholiken sowie 67 % der Protestanten glaubten an einen Gott (2005: 85 % und 79 %)"
],
"positive_ctxs": [
{
"title": "Gott",
"text": "Gott\
=== Demografie ===
Eine Zusammenfassung von Umfrageergebnissen aus verschiedenen Staaten ergab im Jahr 2007, dass es weltweit zwischen 505 und 749 Millionen Atheisten und Agnostiker gibt. Laut der Encyclopædia Britannica gab es 2009 weltweit 640 Mio. Nichtreligiöse und Agnostiker (9,4 %), und weitere 139 Mio. Atheisten (2,0 %), hauptsächlich in der Volksrepublik China.\\\\\\\\
Bei einer Eurobarometer-Umfrage im Jahr 2005 wurde festgestellt, dass 52 % der damaligen EU-Bevölkerung glaubt, dass es einen Gott gibt. Eine vagere Frage nach dem Glauben an „eine andere spirituelle Kraft oder Lebenskraft“ wurde von weiteren 27 % positiv beantwortet. Bezüglich der Gottgläubigkeit bestanden große Unterschiede zwischen den einzelnen europäischen Staaten. Die Umfrage ergab, dass der Glaube an Gott in Staaten mit starkem kirchlichen Einfluss am stärksten verbreitet ist, dass mehr Frauen (58 %) als Männer (45 %) an einen Gott glauben und dass der Gottglaube mit höherem Alter, geringerer Bildung und politisch rechtsgerichteten Ansichten korreliert.\\\\\\\\
Laut einer Befragung von 1003 Personen in Deutschland im März 2019 glauben 55 % an einen Gott; 2005 waren es 66 % gewesen. 75 % der befragten Katholiken sowie 67 % der Protestanten glaubten an einen Gott (2005: 85 % und 79 %). Unter Konfessionslosen ging die Glaubensquote von 28 auf 20 % zurück. Unter Frauen (60 %) war der Glauben 2019 stärker ausgeprägt als unter Männern (50 %), in Westdeutschland (63 %) weiter verbreitet als in Ostdeutschland (26 %).",
"passage_id": ""
}
],
"negative_ctxs": [],
"hard_negative_ctxs": [
{
"title": "Christentum",
"text": "Christentum\
\
=== Ursprung und Einflüsse ===\
Die ersten Christen waren Juden, die zum Glauben an Jesus Christus fanden. In ihm erkannten sie den bereits durch die biblische Prophetie verheißenen Messias (hebräisch: ''maschiach'', griechisch: ''Christos'', latinisiert ''Christus''), auf dessen Kommen die Juden bis heute warten. Die Urchristen übernahmen aus der jüdischen Tradition sämtliche heiligen Schriften (den Tanach), wie auch den Glauben an einen Messias oder Christus (''christos'': Gesalbter). Von den Juden übernommen wurden die Art der Gottesverehrung, das Gebet der Psalmen u. v. a. m. Eine weitere Gemeinsamkeit mit dem Judentum besteht in der Anbetung desselben Schöpfergottes. Jedoch sehen fast alle Christen Gott als ''einen'' dreieinigen Gott an: den Vater, den Sohn (Christus) und den Heiligen Geist. Darüber, wie der dreieinige Gott konkret gedacht werden kann, gibt es unter den christlichen Konfessionen und Gruppierungen unterschiedliche Auffassungen bis hin zur Ablehnung der Dreieinigkeit Gottes (Antitrinitarier). Der Glaube an Jesus Christus führte zu Spannungen und schließlich zur Trennung zwischen Juden, die diesen Glauben annahmen, und Juden, die dies nicht taten, da diese es unter anderem ablehnten, einen Menschen anzubeten, denn sie sahen in Jesus Christus nicht den verheißenen Messias und erst recht nicht den Sohn Gottes. Die heutige Zeitrechnung wird von der Geburt Christi aus gezählt. Anno Domini (A. D.) bedeutet „im Jahr des Herrn“.",
"passage_id": ""
},
{
"title": "Noachidische_Gebote",
"text": "Noachidische_Gebote\
\
=== Die kommende Welt ===\
Der Glaube an eine ''Kommende Welt'' (Olam Haba) bzw. an eine ''Welt des ewigen Lebens'' ist ein Grundprinzip des Judentums. Dieser jüdische Glaube ist von dem christlichen Glauben an das ''Ewige Leben'' fundamental unterschieden. Die jüdische Lehre spricht niemandem das Heil dieser kommenden Welt ab, droht aber auch nicht mit Höllenstrafen im Jenseits. Juden glauben schlicht, dass allen Menschen ein Anteil der kommenden Welt zuteilwerden kann. Es gibt zwar viele Vorstellungen der kommenden Welt, aber keine kanonische Festlegung ihrer Beschaffenheit; d. h., das Judentum kennt keine eindeutige Antwort darauf, was nach dem Tod mit uns geschieht. Die Frage nach dem Leben nach dem Tod wird auch als weniger wesentlich angesehen, als Fragen, die das Leben des Menschen auf Erden und in der Gesellschaft betreffen.\
Der jüdische Glaube an eine kommende Welt bedeutet nicht, dass Menschen, die nie von der Tora gehört haben, böse oder sonst minderwertige Menschen sind. Das Judentum lehrt den Glauben, dass alle Menschen mit Gott verbunden sind. Es gibt im Judentum daher keinen Grund, zu missionieren. Das Judentum lehrt auch, dass alle Menschen sich darin gleichen, dass sie weder prinzipiell gut noch böse sind, sondern eine Neigung zum Guten wie zum Bösen haben. Während des irdischen Lebens sollte sich der Mensch immer wieder für das Gute entscheiden.",
"passage_id": ""
},
{
"title": "Figuren_und_Schauplätze_der_Scheibenwelt-Romane",
"text": "Figuren_und_Schauplätze_der_Scheibenwelt-Romane\
\
=== Herkunft ===\
Es gibt unzählig viele Götter auf der Scheibenwelt, die so genannten „geringen Götter“, die überall sind, aber keine Macht haben. Erst wenn sie durch irgendein Ereignis Gläubige gewinnen, werden sie mächtiger. Je mehr Glauben, desto mehr Macht. Dabei nehmen sie die Gestalt an, die die Menschen ihnen geben (zum Beispiel Offler). Wenn ein Gott mächtig genug ist, erhält er Einlass in den Cori Celesti, den Berg der Götter, der sich in der Mitte der Scheibenwelt erhebt. Da Menschen wankelmütig sind, kann es auch geschehen, dass sie den Glauben verlieren und einen Gott damit entmachten (s. „Einfach Göttlich“).",
"passage_id": ""
}
]
},
```
### Data Fields
- `positive_ctxs`: a dictionary feature containing:
- `title`: a `string` feature.
- `text`: a `string` feature.
- `passage_id`: a `string` feature.
- `negative_ctxs`: a dictionary feature containing:
- `title`: a `string` feature.
- `text`: a `string` feature.
- `passage_id`: a `string` feature.
- `hard_negative_ctxs`: a dictionary feature containing:
- `title`: a `string` feature.
- `text`: a `string` feature.
- `passage_id`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a list feature containing:
- a `string` feature.
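The nested record structure above can be flattened into DPR-style training triples. A minimal sketch, assuming records follow the schema shown in the sample; the helper name and the toy record are illustrative, not part of the dataset:

```python
def to_dpr_triples(example):
    """Flatten one GermanDPR record into (question, positive, hard_negative) triples."""
    question = example["question"]
    positives = [ctx["text"] for ctx in example["positive_ctxs"]]
    hard_negatives = [ctx["text"] for ctx in example["hard_negative_ctxs"]]
    # One triple per (positive, hard negative) combination.
    return [(question, pos, neg) for pos in positives for neg in hard_negatives]

# Toy record mirroring the schema above (not real dataset content).
record = {
    "question": "Wie viele?",
    "positive_ctxs": [{"title": "t", "text": "p1", "passage_id": ""}],
    "negative_ctxs": [],
    "hard_negative_ctxs": [
        {"title": "a", "text": "n1", "passage_id": ""},
        {"title": "b", "text": "n2", "passage_id": ""},
        {"title": "c", "text": "n3", "passage_id": ""},
    ],
}
triples = to_dpr_triples(record)  # 1 positive x 3 hard negatives -> 3 triples
```

With one positive and three hard negative contexts per pair, each record yields exactly three triples.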
### Data Splits
The dataset is split into a training set and a test set.
The final GermanDPR dataset comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set. For each pair, there is one positive context and three hard negative contexts.
| |questions|answers|positive contexts|hard negative contexts|
|------|--------:|------:|----------------:|---------------------:|
|train|9275|9275|9275|27825|
|test|1025|1025|1025|3075|
## Additional Information
### Dataset Curators
The dataset was initially created by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at deepset.ai
### Citation Information
```
@misc{möller2021germanquad,
title={GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval},
author={Timo Möller and Julian Risch and Malte Pietsch},
year={2021},
eprint={2104.12741},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
philschmid/germeval18 | 2022-02-28T17:14:55.000Z | [
"region:us"
] | philschmid | null | null | null | 2 | 83 | Entry not found |
pmc/open_access | 2023-03-14T17:29:59.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"license:cc-by-4.0",
"license:cc-by-sa-4.0",
"license:cc-by-nd-4.0",
"license:cc-by-nc-4.0",
"license:cc-by-nc-sa-4.0",
"license:cc-by-nc-nd-4.0",
"license:other",
"license:unknown",
"region:us"
] | pmc | The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets | PMC Open Access Subset [Internet]. Bethesda (MD): National Library of Medicine. 2003 - [cited YEAR MONTH DAY]. Available from https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/ | null | 11 | 83 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
- cc-by-4.0
- cc-by-sa-4.0
- cc-by-nd-4.0
- cc-by-nc-4.0
- cc-by-nc-sa-4.0
- cc-by-nc-nd-4.0
- other
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: PMC Open Access
---
# Dataset Card for PMC Open Access Subset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [PubMed Central](mailto:pubmedcentral@ncbi.nlm.nih.gov)
### Dataset Summary
The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse. Not all articles in PMC are available for text mining and other reuse; many have
copyright protection. However, articles in the PMC Open Access Subset are made available under Creative Commons or
similar licenses that generally allow more liberal redistribution and reuse than a traditional copyrighted work. The
PMC Open Access Subset is one part of the PMC Article Datasets.
Within the PMC Open Access Subset, there are three groupings:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
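Since the grouping is determined by each record's `license` field, it can be recovered with a simple lookup. A rough sketch: the set members are transcribed from the list above, and the exact license strings (e.g. `"CC BY"`) are assumed to match the values stored in the data:

```python
COMMERCIAL = {"CC0", "CC BY", "CC BY-SA", "CC BY-ND"}
NON_COMMERCIAL = {"CC BY-NC", "CC BY-NC-SA", "CC BY-NC-ND"}

def license_group(record):
    """Map a record's license string onto the three PMC OA groupings."""
    lic = record["license"]
    if lic in COMMERCIAL:
        return "commercial"
    if lic in NON_COMMERCIAL:
        return "non-commercial"
    return "other"

# Illustrative records; only the `license` field matters here.
examples = [{"license": "CC BY"}, {"license": "CC BY-NC"}, {"license": "unknown"}]
groups = [license_group(r) for r in examples]
```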
### Supported Tasks and Leaderboards
- Language modeling
### Languages
English (`en`).
## Dataset Structure
### Data Instances
```
{
'text': "==== Front\nPLoS BiolPLoS BiolpbioplosbiolPLoS Biology1544-91731545-7885Public Library of Science San Francisco, USA 10.1371/journal.pbio.0000005Research ArticleGenetics/Genomics/Gene TherapyInfectious DiseasesMicrobiologyPlasmodiumThe Transcriptome of the Intraerythrocytic Developmental Cycle of Plasmodium falciparum\n P. falciparum IDC TranscriptomeBozdech Zbynek \n1\nLlinás Manuel \n1\nPulliam Brian Lee \n1\nWong Edith D \n1\nZhu Jingchun \n2\nDeRisi Joseph L joe@derisilab.ucsf.edu\n1\n1Department of Biochemistry and Biophysics, University of California, San FranciscoSan Francisco, CaliforniaUnited States of America2Department of Biological and Medical Informatics, University of California, San FranciscoSan Francisco, CaliforniaUnited States of America10 2003 18 8 2003 18 8 2003 1 1 e512 6 2003 25 7 2003 Copyright: ©2003 Bozdech et al.2003This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.\nMicroarray Analysis: Genome-Scale Hypothesis Scanning \n\nMonitoring Malaria: Genomic Activity of the Parasite in Human Blood Cells \n\nPlasmodium falciparum is the causative agent of the most burdensome form of human malaria, affecting 200–300 million individuals per year worldwide. The recently sequenced genome of P. falciparum revealed over 5,400 genes, of which 60% encode proteins of unknown function. Insights into the biochemical function and regulation of these genes will provide the foundation for future drug and vaccine development efforts toward eradication of this disease. By analyzing the complete asexual intraerythrocytic developmental cycle (IDC) transcriptome of the HB3 strain of P. falciparum, we demonstrate that at least 60% of the genome is transcriptionally active during this stage. 
Our data demonstrate that this parasite has evolved an extremely specialized mode of transcriptional regulation that produces a continuous cascade of gene expression, beginning with genes corresponding to general cellular processes, such as protein synthesis, and ending with Plasmodium-specific functionalities, such as genes involved in erythrocyte invasion. The data reveal that genes contiguous along the chromosomes are rarely coregulated, while transcription from the plastid genome is highly coregulated and likely polycistronic. Comparative genomic hybridization between HB3 and the reference genome strain (3D7) was used to distinguish between genes not expressed during the IDC and genes not detected because of possible sequence variations...
'pmid': '12929205',
'accession_id': 'PMC176545',
'license': 'CC BY',
'last_updated': '2021-01-05 08:21:03',
'retracted': 'no',
'citation': 'PLoS Biol. 2003 Oct 18; 1(1):e5'
}
```
### Data Fields
- `text`: Text content.
- `pmid`: PubMed ID.
- `accession_id`: Unique identifier (PMCID) of the article record.
- `license`: License type.
- `last_updated`: Date of last update.
- `retracted`: Whether retracted or not.
- `citation`: Citation reference.
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License terms vary. Please refer to the license statement in each article for specific terms of use.
Within the PMC Open Access Subset, there are three groupings based on available license terms:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses;
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
```
PMC Open Access Subset [Internet]. Bethesda (MD): National Library of Medicine. 2003 - [cited YEAR MONTH DAY]. Available from https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
facebook/pmd | 2022-08-09T23:51:39.000Z | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2112.04482",
"arxiv:2111.11431",
"region:us"
] | facebook | Introduced in the FLAVA paper, Public Multimodal Dataset (PMD) is a collection of publicly-available image-text pair datasets. PMD contains 70M image-text pairs in total with 68M unique images. The dataset contains pairs from Conceptual Captions, Conceptual Captions 12M, WIT, Localized Narratives, RedCaps, COCO, SBU Captions, Visual Genome and a subset of the YFCC100M dataset. | @inproceedings{singh2022flava,
title={Flava: A foundational language and vision alignment model},
author={Singh, Amanpreet and Hu, Ronghang and Goswami, Vedanuj and Couairon, Guillaume and Galuba, Wojciech and Rohrbach, Marcus and Kiela, Douwe},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={15638--15650},
year={2022}
} | null | 26 | 83 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: pmd
pretty_name: PMD
extra_gated_prompt: |
By clicking on “Access repository” below, you also agree to individual licensing terms for each of the subset datasets of the PMD as noted at https://huggingface.co/datasets/facebook/pmd#additional-information.
---
# Dataset Card for PMD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Compared to original FLAVA paper](#compared-to-original-flava-paper)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PMD homepage](https://flava-model.github.io/)
- **Repository:** [PMD repository](https://huggingface.co/datasets/facebook/pmd)
- **Paper:** [FLAVA: A Foundational Language And Vision Alignment Model
](https://arxiv.org/abs/2112.04482)
- **Leaderboard:**
- **Point of Contact:** [Amanpreet Singh](mailto:amanpreet@nyu.edu)
### Dataset Summary
Introduced in the FLAVA paper, Public Multimodal Dataset (PMD) is a collection of publicly-available image-text pair datasets. PMD contains 70M image-text pairs in total with 68M unique images. The dataset contains pairs from Conceptual Captions, Conceptual Captions 12M, WIT, Localized Narratives, RedCaps, COCO, SBU Captions, Visual Genome and a subset of YFCC100M dataset.
If you use PMD, please cite the original FLAVA paper as follows, along with the individual datasets (see the Citation Information section below for references):
```bibtex
@inproceedings{singh2022flava,
title={Flava: A foundational language and vision alignment model},
author={Singh, Amanpreet and Hu, Ronghang and Goswami, Vedanuj and Couairon, Guillaume and Galuba, Wojciech and Rohrbach, Marcus and Kiela, Douwe},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={15638--15650},
year={2022}
}
```
You can load this dataset by first logging into Hugging Face using `huggingface-cli login` and then running the following commands:
```py
from datasets import load_dataset
pmd = load_dataset("facebook/pmd", use_auth_token=True)
```
You can also load the dataset in streaming mode if you don't want to download the big dataset files (> 50GB locally without the images):
```py
pmd = load_dataset("facebook/pmd", use_auth_token=True, streaming=True)
```
### Dataset Preprocessing
This dataset doesn't download all of the images locally by default. Instead, it exposes URLs for some of the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_data, timeout=None, retries=0):
image_url, image = image_data
if image is not None:
return image
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, zip(batch["image_url"], batch["image"])))
return batch
num_threads = 20
dset = load_dataset("pmd", use_auth_token=True)
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
#### Save to disk
You can also save the dataset to disk for faster and direct loading next time but beware of the space required:
```py
dset.save_to_disk(</path/to/save>)
```
#### Load Subsets
You can also download a specific set from the PMD dataset by using
```py
dset = load_dataset("pmd", <choice>, use_auth_token=True)
```
The choices are:
```
"all", "coco", "sbu", "wit", "localized_narratives", "conceptual_captions", "visual_genome", "conceptual_captions_12M", "redcaps", "yfcc100M_subset", "localized_narratives_openimages", "localized_narratives_ade20k", "localized_narratives_coco"
```
#### Flickr30K Localized Narratives Subset
The Flickr30K subset of Localized Narratives is not included by default as it requires a manual download. You can include it by downloading the tar file from [here](http://shannon.cs.illinois.edu/DenotationGraph/data/index.html) (after signing an agreement) to `</path/to/Downloads>`, and then loading either the whole PMD or just the Localized Narratives subset by:
```py
dset = load_dataset("pmd", data_dir=</path/to/Downloads/flickr30k-images.tar.gz>, use_auth_token=True, use_flickr30k_ln=True)
# Load LN subset only
dset = load_dataset("pmd", "localized_narratives", data_dir=</path/to/Downloads/flickr30k-images.tar.gz>, use_auth_token=True, use_flickr30k_ln=True)
```
#### Facing issues?
If you are facing issues, you can try loading a specific revision of the repo by using:
```py
dset = load_dataset("pmd", use_auth_token=True, revision="311cd48")
```
### Supported Tasks and Leaderboards
In the FLAVA paper, the dataset has been used to pretrain the FLAVA model as a source of well-aligned image-text pairs. This allows having a generic vision-and-language model which can be fine-tuned for a variety of tasks.
We anticipate that the dataset can be used to train deep neural networks that perform image captioning and that learn transferable visual representations for a variety of downstream visual recognition tasks (image classification, object detection, instance segmentation). We also anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks, such as image or text retrieval or text-to-image synthesis.
### Languages
All of the subsets in PMD use English as their primary language.
## Dataset Structure
### Data Instances
Each instance in PMD represents a single image-text pair:
```
{
'image_url': None,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCFF86A1E80>,
'text': 'A woman wearing a net on her head cutting a cake. ',
'source': 'coco',
'meta': '{\n "annotation": [\n "A woman wearing a net on her head cutting a cake. "\n ],\n "image_path": "zip:/val2014/COCO_val2014_000000522418.jpg::http:/images.cocodataset.org/zips/val2014.zip"\n}'
}
```
### Data Fields
- `image_url`: Static URL for downloading the image associated with the text. Can be `None` if image is locally available.
- `image`: A PIL Image object for the image associated with the text. Can be `None` if image is not locally available.
- `text`: `str`, A textual description corresponding to the image.
- `source`: `str`, The PMD subset which this pair is from.
- `meta`: `str`, A json representation of the original annotation from the dataset.
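Because `meta` is stored as a JSON string rather than a parsed object, the original annotation has to be recovered with `json.loads`. A small sketch on an illustrative record shaped like the data instance above:

```python
import json

# Illustrative record; `meta` mirrors the structure shown in the data instance.
example = {
    "source": "coco",
    "meta": '{"annotation": ["A woman wearing a net on her head cutting a cake. "], "image_path": "zip:/val2014/COCO_val2014_000000522418.jpg"}',
}
meta = json.loads(example["meta"])
captions = meta["annotation"]  # list of caption strings
```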
### Data Splits
All the data is contained in the training set. The training set has nearly 70M instances.
We intend for this dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Thus, all of the instances should be used for pretraining. We specifically make sure that there is no overlap with Karpathy's COCO validation set, so users can use that subset for any validation purposes if required. Users can also load Karpathy's val subset by specifying the "validation" split while loading PMD. This will also load other "validation" splits for some subsets, if they are available.
## Dataset Creation
### Curation Rationale
From the paper:
> Purely contrastive methods, however, also have important shortcomings. Their cross-modal nature does not make them easily usable on multimodal problems that require dealing with both modalities at the same time. They require large corpora, which for both CLIP and ALIGN have not been made accessible to the research community and the details of which remain shrouded in mystery, notwithstanding well-known issues with the construction of such datasets
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> **Data Collection Pipeline**
- For the YFCC100M dataset, we filter the image-text data by discarding non-English captions and only keeping captions that contain more than two words from the description field of each image, if this does not pass our filters we consider the title field. Other than that, we did not do any additional filtering.
- For the VisualGenome, COCO and Localized Narratives subsets, we remove any overlaps with Karpathy's COCO val and test sets.
- For Localized Narratives, we split the original caption, which is a paragraph, into multiple captions using the spaCy library and take the Cartesian product, so that each sentence becomes a separate image-text pair.
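The Localized Narratives splitting step can be sketched as follows. This sketch substitutes a naive regex sentence splitter for spaCy's sentence segmentation so that it stays self-contained; the function name and toy inputs are illustrative:

```python
import re

def split_narrative(image_id, paragraph):
    """Split one narrative paragraph into sentences, pairing each with the image."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]
    # Cartesian product of {image} x sentences: each sentence becomes its own pair.
    return [(image_id, s) for s in sentences]

pairs = split_narrative("img_001", "A dog runs. It catches a ball.")
# -> [("img_001", "A dog runs."), ("img_001", "It catches a ball.")]
```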
#### Compared to original FLAVA paper
The PMD dataset in this repo doesn't correspond 1:1 exactly to the original PMD dataset used in the [FLAVA](https://arxiv.org/abs/2112.04482) paper though this repo is built by the same authors. This is due to difficulty in reproducing WiT and YFCC100M subsets exactly. This repo in general contains more data than the PMD in the FLAVA paper and hence should probably result in better performance.
#### Who are the source language producers?
Please refer to the original dataset papers to understand where the content is coming from.
### Annotations
#### Annotation process
The dataset is a combination of existing public datasets with some filtering applied on top so there is no annotation process involved.
#### Who are the annotators?
Please refer to the original dataset papers to understand where the content is coming from.
### Personal and Sensitive Information
Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
## Considerations for Using the Data
### Social Impact of Dataset
From the paper:
> **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g.,
a data protection impact analysis) been conducted?**
No.
### Discussion of Biases
Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
### Other Known Limitations
From the paper:
> **Are there any errors, sources of noise, or redundancies in the dataset?**
PMD is noisy by design since image-text pairs on the internet are noisy and unstructured. Though, since it contains sources such as COCO, Visual Genome, and Localized Narratives which are hand-curated by annotators, it has a lot of well-aligned data as well. So, it is definitely more aligned compared to e.g. LAION.
Some instances may also have duplicate images and captions but should have almost no effect in training large-scale models.
> **Does the dataset contain data that might be considered confidential (e.g., data that is
protected by legal privilege or by doctor-patient confidentiality, data that includes the
content of individuals non-public communications)?**
Not that the authors know of. Please refer to the original dataset papers to understand where the content is coming from. For example, a detailed description on this for RedCaps can be found [here](https://huggingface.co/datasets/red_caps).
## Additional Information
### Dataset Curators
The authors of the original dataset papers, as well as the authors of the FLAVA paper (Amanpreet, Ronghang, Vedanuj, Guillaume, Wojciech, Marcus and Douwe).
### Licensing Information
Here are the individual licenses from each of the datasets that apply if you use this dataset:
#### COCO
The annotations in the COCO dataset belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.
The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
#### Conceptual Captions
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
#### WIT
This data is available under the [Creative Commons Attribution-ShareAlike 3.0 Unported](LICENSE) license.
#### Visual Genome
Visual Genome by Ranjay Krishna et al is licensed under a Creative Commons Attribution 4.0 International License.
#### Localized Narratives
All the annotations available through this website are released under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. You are free to redistribute and modify the annotations, but we ask you to please keep the original attribution to our paper.
#### YFCC100M
Use of the original media files is subject to the Creative Commons licenses chosen by their creators/uploaders. License information for each media file can be found within [the YFCC100M metadata](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/#yfcc100m). Use of the dataset is subject to the relevant Webscope License Agreement, which you need to agree to if you use this dataset.
#### RedCaps
The image metadata is licensed under the CC-BY 4.0 license. Additionally, uses of this dataset are subject to the Reddit API terms (https://www.reddit.com/wiki/api-terms), and users must comply with the Reddit User Agreement, Content Policy, and Privacy Policy, all accessible at https://www.redditinc.com/policies.
Similar to RedCaps:
> PMD should only be used for non-commercial research. PMD should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of PMD are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
### Citation Information
Please cite the main FLAVA paper in which PMD was introduced along with each of the subsets used in PMD as follows:
```bibtex
@inproceedings{singh2022flava,
title={Flava: A foundational language and vision alignment model},
author={Singh, Amanpreet and Hu, Ronghang and Goswami, Vedanuj and Couairon, Guillaume and Galuba, Wojciech and Rohrbach, Marcus and Kiela, Douwe},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={15638--15650},
year={2022}
}
@article{chen2015microsoft,
title={Microsoft coco captions: Data collection and evaluation server},
author={Chen, Xinlei and Fang, Hao and Lin, Tsung-Yi and Vedantam, Ramakrishna and Gupta, Saurabh and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
journal={arXiv preprint arXiv:1504.00325},
year={2015}
}
@inproceedings{ordonez2011sbucaptions,
Author = {Vicente Ordonez and Girish Kulkarni and Tamara L. Berg},
Title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
Booktitle = {Neural Information Processing Systems ({NIPS})},
Year = {2011},
}
@article{krishna2017visual,
title={Visual genome: Connecting language and vision using crowdsourced dense image annotations},
author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and others},
journal={International journal of computer vision},
volume={123},
number={1},
pages={32--73},
year={2017},
publisher={Springer}
}
@article{srinivasan2021wit,
title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},
author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},
journal={arXiv preprint arXiv:2103.01913},
year={2021}
}
@inproceedings{sharma2018conceptual,
title={Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning},
author={Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={2556--2565},
year={2018}
}
@inproceedings{changpinyo2021conceptual,
title={Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts},
author={Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={3558--3568},
year={2021}
}
@inproceedings{ponttuset2020localized,
author = {Jordi Pont-Tuset and Jasper Uijlings and Soravit Changpinyo and Radu Soricut and Vittorio Ferrari},
title = {Connecting Vision and Language with Localized Narratives},
booktitle = {ECCV},
year = {2020}
}
@article{thomee2016yfcc100m,
title={YFCC100M: The new data in multimedia research},
author={Thomee, Bart and Shamma, David A and Friedland, Gerald and Elizalde, Benjamin and Ni, Karl and Poland, Douglas and Borth, Damian and Li, Li-Jia},
journal={Communications of the ACM},
volume={59},
number={2},
pages={64--73},
year={2016},
publisher={ACM New York, NY, USA}
}
@misc{desai2021redcaps,
title={RedCaps: web-curated image-text data created by the people, for the people},
author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
year={2021},
eprint={2111.11431},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@aps](https://github.com/apsdehal), [Thomas Wang](https://huggingface.co/TimeRobber), and [@VictorSanh](https://huggingface.co/VictorSanh) for adding this dataset. |
Hello-SimpleAI/HC3-Chinese | 2023-01-21T13:11:49.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"ChatGPT",
"SimpleAI",
"Detection",
"OOD",
"arxiv:2301.07597",
"region:us"
] | Hello-SimpleAI | Human ChatGPT Comparison Corpus (HC3) Chinese Version | \ | null | 101 | 83 | ---
task_categories:
- text-classification
- question-answering
- sentence-similarity
- zero-shot-classification
language:
- en
- zh
tags:
- ChatGPT
- SimpleAI
- Detection
- OOD
size_categories:
- 10K<n<100K
license: cc-by-sa-4.0
---
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, the **HC3** dataset.
This dataset is introduced in our paper:
- Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597)
Code, models and analysis are available on our GitHub:
- GitHub: [**Chatgpt-Comparison-Detection project** 🔬](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
# Dataset Copyright
If a source dataset used in this corpus has a specific license that is stricter than CC-BY-SA, our products follow the same. If not, they follow the CC-BY-SA license.
See [dataset copyright](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection#dataset-copyright).
# Citation
Check out this paper: [arXiv:2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
    journal={arXiv preprint arXiv:2301.07597},
year = "2023",
}
``` |
IlyaGusev/librusec | 2023-03-20T16:03:43.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"region:us"
] | IlyaGusev | null | null | null | 4 | 83 | ---
dataset_info:
features:
- name: id
dtype: uint64
- name: text
dtype: string
splits:
- name: train
num_bytes: 125126513109
num_examples: 223256
download_size: 34905399148
dataset_size: 125126513109
task_categories:
- text-generation
language:
- ru
size_categories:
- 100K<n<1M
---
# Librusec dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
## Description
**Summary:** Based on http://panchenko.me/data/russe/librusec_fb2.plain.gz. Uploaded here for convenience. Additional cleaning was performed.
**Script:** [create_librusec.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_librusec.py)
**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)
**Languages:** Russian.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/librusec', split="train", streaming=True)
for example in dataset:
print(example["text"])
``` |
sanagnos/processed_gpt_dataset_big | 2023-04-06T20:05:27.000Z | [
"region:us"
] | sanagnos | null | null | null | 0 | 83 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 23584245444.0
num_examples: 3831099
download_size: 6899066299
dataset_size: 23584245444.0
---
# Dataset Card for "processed_gpt_dataset_big"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
enesxgrahovac/the-feynman-lectures-on-physics | 2023-04-07T20:56:25.000Z | [
"region:us"
] | enesxgrahovac | null | null | null | 3 | 83 | ---
dataset_info:
features:
- name: book_volume
dtype: string
- name: book_title
dtype: string
- name: chapter_number
dtype: string
- name: chapter_title
dtype: string
- name: section_number
dtype: string
- name: section_title
dtype: string
- name: section_text
dtype: string
splits:
- name: train
num_bytes: 4609643
num_examples: 641
download_size: 2276758
dataset_size: 4609643
---
# Dataset Card for "the-feynman-lectures-on-physics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heegyu/OIG-small-chip2-ko | 2023-04-19T13:25:41.000Z | [
"size_categories:100K<n<1M",
"language:ko",
"language:en",
"license:apache-2.0",
"region:us"
] | heegyu | null | null | null | 7 | 83 | ---
license: apache-2.0
language:
- ko
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "OIG-small-chip2-ko"
- 210282 items
- Original Dataset: OIG-small-chip2 dataset from https://laion.ai/blog/oig-dataset/
- Translated using the Google Translate API

Example:
```
{
"user": "Is there a good way to clean up my credit report?\n\n",
"chip2": "That depends on why your credit score is low. Would you like to share more details about your situation?",
"index": 210272,
"user_translated": "내 신용 보고서를 정리하는 좋은 방법이 있습니까?\n\n",
"chip2_translated": "신용 점수가 낮은 이유에 따라 다릅니다. 귀하의 상황에 대해 더 자세히 알려주시겠습니까?"
}
``` |
Sp1786/multiclass-sentiment-analysis-dataset | 2023-06-25T08:01:27.000Z | [
"task_categories:text-classification",
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | Sp1786 | null | null | null | 0 | 83 | ---
license: apache-2.0
task_categories:
- text-classification
- translation
language:
- en
tags:
- code
pretty_name: multiclass-sentiment-analysis-dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Dewa/Dog_Emotion_Dataset_v2 | 2023-07-28T18:47:48.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:creativeml-openrail-m",
"region:us"
] | Dewa | null | null | null | 1 | 83 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype: int64
- name: emotion
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 128018890.4
num_examples: 3200
- name: test
num_bytes: 31722930.4
num_examples: 800
download_size: 162369679
dataset_size: 159741820.8
license: creativeml-openrail-m
task_categories:
- image-classification
size_categories:
- 1K<n<10K
---
# Dataset Card for "Dog_Emotion_Dataset_v2"
- The dataset is based on a Kaggle dataset
# Label and its Meaning
- `0 : sad`
- `1 : angry`
- `2 : relaxed`
- `3 : happy` |
adalbertojunior/ICD_dataset | 2023-09-13T21:59:45.000Z | [
"region:us"
] | adalbertojunior | null | null | null | 0 | 83 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label
sequence: string
splits:
- name: train
num_bytes: 418410601
num_examples: 39354
- name: test
num_bytes: 53529100
num_examples: 5000
- name: validation
num_bytes: 52947510
num_examples: 5000
download_size: 301971173
dataset_size: 524887211
---
# Dataset Card for "ICD_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/eval_tag_squad_v0 | 2023-09-21T15:52:30.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 83 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 87035544
num_examples: 87599
- name: validation
num_bytes: 11397371
num_examples: 10570
download_size: 21419187
dataset_size: 98432915
---
# Dataset Card for "eval_tag_squad_v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LeStoe11/geeks4geeks_fixed | 2023-09-27T18:21:59.000Z | [
"region:us"
] | LeStoe11 | null | null | null | 0 | 83 | Entry not found |
chirunder/tictactoe_images | 2023-09-27T16:13:15.000Z | [
"region:us"
] | chirunder | null | null | null | 0 | 83 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: winner
dtype: string
- name: is_valid
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 72647038.12396485
num_examples: 17714
- name: test
num_bytes: 8075082.876035158
num_examples: 1969
download_size: 68793496
dataset_size: 80722121.0
---
# Dataset Card for "tictactoe_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
veezbo/akkadian_english_corpus | 2023-09-30T21:32:28.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | veezbo | null | null | null | 1 | 83 | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: English-translated Akkadian Corpus
size_categories:
- 1K<n<10K
---
# Akkadian English Corpus
This dataset is a cleaned corpus of English-translated Akkadian. It can be, and has been, used for text-generation tasks, for example to fine-tune LLMs.
## How it was generated
Please visit my [repo](https://github.com/veezbo/akkadian_english_corpus) on Github which explains the steps that were taken to prepare this dataset for a text generation task.
At a high level, these are the steps that were taken:
- Sourced a high-quality dataset of English-translated Akkadian by experts
- Enforced a minimum line length
- Removed duplicate lines
- Removed textual notes and other generic notes within parentheses
- Inserted translation notes and literal notes in place (preserving grammar and adding clarity to the corpus)
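As a rough sketch, the steps above could look like the following cleaning pass. Treat it as an illustrative approximation: the actual pipeline (see the repo linked below) inserted translation and literal notes in place rather than stripping every parenthesis, and the threshold used here is an assumption.

```python
import re

def clean_corpus(lines, min_len=10):
    """Illustrative cleaning pass: strip parenthesized notes, enforce a
    minimum line length, and drop exact duplicates while preserving order.
    The min_len threshold is an assumed value, not the repo's exact rule."""
    seen = set()
    cleaned = []
    for line in lines:
        # Remove (non-nested) parenthesized notes left in the text.
        line = re.sub(r"\([^()]*\)", "", line).strip()
        # Collapse double spaces left behind by the removal.
        line = re.sub(r"\s{2,}", " ", line)
        if len(line) < min_len or line in seen:
            continue
        seen.add(line)
        cleaned.append(line)
    return cleaned
```

Note that deduplication runs after note removal, so two lines that differ only in their parenthesized notes collapse into one.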
## Credit
Credit for the aggregation of the raw data belongs to the [Akkademia](https://github.com/gaigutherz/Akkademia/tree/master) project. Specifically, the exact data file used as the starting dataset is linked [here](https://github.com/gaigutherz/Akkademia/blob/master/NMT_input/train.en) and was also used to train their SOTA neural machine translation Akkadian->English model as described in their recent [paper](https://academic.oup.com/pnasnexus/article/2/5/pgad096/7147349) Gutherz et al. 2023 [1].
Credit for the original source of the raw data belongs to the incredible Open Richly Annotated Cuneiform Corpus ([ORACC](http://oracc.org)) project [2]. Specifically, as noted by the Akkademia project above, the RINAP 1, 3, 4, and 5 datasets are the source of the original raw data.
## Citations
[1] Gai Gutherz, Shai Gordin, Luis Sáenz, Omer Levy, Jonathan Berant, Translating Akkadian to English with neural machine translation, PNAS Nexus, Volume 2, Issue 5, May 2023, pgad096, https://doi.org/10.1093/pnasnexus/pgad096
[2] Jamie Novotny, Eleanor Robson, Steve Tinney, Niek Veldhuis, et al. Open Richly Annotated Cuneiform Corpus, http://oracc.org |
kor_hate | 2023-01-25T14:33:47.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:2005.12503",
"region:us"
] | null | Human-annotated Korean corpus collected from a popular domestic entertainment news aggregation platform
for toxic speech detection. Comments are annotated for gender bias, social bias and hate speech. | @inproceedings{moon-etal-2020-beep,
title = "{BEEP}! {K}orean Corpus of Online News Comments for Toxic Speech Detection",
author = "Moon, Jihyung and
Cho, Won Ik and
Lee, Junbum",
booktitle = "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.socialnlp-1.4",
pages = "25--31",
abstract = "Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff{'}s alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks.",
} | null | 3 | 82 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: korean-hatespeech-dataset
pretty_name: Korean HateSpeech Dataset
dataset_info:
features:
- name: comments
dtype: string
- name: contain_gender_bias
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: bias
dtype:
class_label:
names:
'0': none
'1': gender
'2': others
- name: hate
dtype:
class_label:
names:
'0': hate
'1': offensive
'2': none
splits:
- name: train
num_bytes: 983608
num_examples: 7896
- name: test
num_bytes: 58913
num_examples: 471
download_size: 968449
dataset_size: 1042521
---
# Dataset Card for the Korean HateSpeech Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
- **Repository:** [Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
- **Paper:** [BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection](https://arxiv.org/abs/2005.12503)
- **Point of Contact:** [Steven Liu](stevhliu@gmail.com)
### Dataset Summary
The Korean HateSpeech Dataset is a dataset of 8367 human-labeled entertainment news comments from a popular Korean news aggregation platform. Each comment was evaluated for either social bias (labels: `gender`, `others`, `none`), hate speech (labels: `hate`, `offensive`, `none`), or gender bias (labels: `True`, `False`). The dataset was created to support the identification of toxic comments on online platforms where users can remain anonymous.
### Supported Tasks and Leaderboards
* `multi-label classification`: The dataset can be used to train a model for hate speech detection. A BERT model can be presented with a Korean entertainment news comment and be asked to label whether it contains social bias, gender bias and hate speech. Users can participate in a Kaggle leaderboard [here](https://www.kaggle.com/c/korean-hate-speech-detection/overview).
### Languages
The text in the dataset is in Korean and the associated BCP-47 code is `ko-KR`.
## Dataset Structure
### Data Instances
An example data instance contains a `comments` containing the text of the news comment and then labels for each of the following fields: `contain_gender_bias`, `bias` and `hate`.
```python
{'comments': '설마 ㅈ 현정 작가 아니지??',
'contain_gender_bias': 'True',
'bias': 'gender',
'hate': 'hate'
}
```
### Data Fields
* `comments`: text from the Korean news comment
* `contain_gender_bias`: a binary `True`/`False` label for the presence of gender bias
* `bias`: determines the type of social bias, which can be:
* `gender`: if the text includes bias for gender role, sexual orientation, sexual identity, and any thoughts on gender-related acts
* `others`: other kinds of factors that are considered not gender-related but social bias, including race, background, nationality, ethnic group, political stance, skin color, religion, handicaps, age, appearance, richness, occupations, the absence of military service experience
* `none`: a comment that does not incorporate the bias
* `hate`: determines how aggressive the comment is, which can be:
* `hate`: if the text is defined as an expression that displays aggressive stances towards individuals/groups with certain characteristics (gender role, sexual orientation, sexual identity, any thoughts on gender-related acts, race, background, nationality, ethnic group, political stance, skin color, religion, handicaps, age, appearance, richness, occupations, the absence of military service experience, etc.)
* `offensive`: if the text contains rude or aggressive contents, can emit sarcasm through rhetorical question or irony, encompass an unethical expression or conveys unidentified rumors
* `none`: a comment that does not incorporate hate
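The label definitions above decode from the integer ids in this card's `dataset_info` metadata as sketched below; when the dataset is loaded with the `datasets` library, `ClassLabel.int2str` performs the same conversion for you.

```python
# Integer-id -> label-name mappings, copied from this card's dataset_info metadata.
BIAS_LABELS = {0: "none", 1: "gender", 2: "others"}
HATE_LABELS = {0: "hate", 1: "offensive", 2: "none"}
GENDER_BIAS_LABELS = {0: "False", 1: "True"}

def decode_example(example):
    """Replace the integer class ids in one example with their label names."""
    return {
        "comments": example["comments"],
        "contain_gender_bias": GENDER_BIAS_LABELS[example["contain_gender_bias"]],
        "bias": BIAS_LABELS[example["bias"]],
        "hate": HATE_LABELS[example["hate"]],
    }

decoded = decode_example(
    {"comments": "예시 댓글", "contain_gender_bias": 1, "bias": 1, "hate": 0}
)
# decoded["bias"] == "gender", decoded["hate"] == "hate"
```

The example input here is hypothetical; real rows carry the same four fields with integer-encoded labels.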
### Data Splits
The data is split into a training and development (test) set. It contains 8367 annotated comments that are split into 7896 comments in the training set and 471 comments in the test set.
## Dataset Creation
### Curation Rationale
The dataset was created to provide the first human-labeled Korean corpus for toxic speech detection from a Korean online entertainment news aggregator. Recently, two young Korean celebrities suffered from a series of tragic incidents that led two major Korean web portals to close the comments sections on their platforms. However, this only serves as a temporary solution, and the fundamental issue has not been solved yet. This dataset hopes to improve Korean hate speech detection.
### Source Data
#### Initial Data Collection and Normalization
A total of 10.4 million comments were collected from an online Korean entertainment news aggregator between Jan. 1, 2018 and Feb. 29, 2020. 1,580 articles were drawn using stratified sampling, and for each article the top 20 comments were extracted, ranked by their Wilson score on the downvotes. Duplicate comments, single-token comments, and comments with more than 100 characters were removed (because longer comments could convey multiple opinions). From here, 10K comments were randomly chosen for annotation.
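The Wilson-score ranking used for comment selection is the standard lower bound of the Wilson confidence interval for a Bernoulli proportion. A minimal sketch follows; the exact variant the authors applied to downvotes may differ, so treat the parameters as assumptions.

```python
import math

def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a proportion
    (z = 1.96 corresponds to ~95% confidence). Ranking by this bound
    keeps a comment with 2/2 downvotes from outranking one with 90/100."""
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom
```

The appeal of this bound for ranking is that it rewards consistent evidence: the same observed proportion scores higher as the vote count grows.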
#### Who are the source language producers?
The language producers are users of the Korean online news platform between 2018 and 2020.
### Annotations
#### Annotation process
Each comment was assigned to three random annotators so that a majority decision could be reached. For more ambiguous comments, annotators were allowed to skip the comment. See Appendix A in the [paper](https://arxiv.org/pdf/2005.12503.pdf) for more detailed guidelines.
#### Who are the annotators?
Annotation was performed by 32 annotators, consisting of 29 annotators from the crowdsourcing platform DeepNatural AI and three NLP researchers.
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to tackle the social issue of users creating toxic comments on online platforms. This dataset aims to improve detection of toxic comments online.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is curated by Jihyung Moon, Won Ik Cho and Junbum Lee.
### Licensing Information
[N/A]
### Citation Information
```
@inproceedings{moon-etal-2020-beep,
title = "{BEEP}! {K}orean Corpus of Online News Comments for Toxic Speech Detection",
author = "Moon, Jihyung and
Cho, Won Ik and
Lee, Junbum",
booktitle = "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.socialnlp-1.4",
pages = "25--31",
abstract = "Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff{'}s alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks.",
}
```
### Contributions
Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset. |
AI-Sweden/SuperLim | 2022-10-21T15:25:24.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:other",
"multilinguality:monolingual",
"language:sv",
"region:us"
] | AI-Sweden | \ | \ | null | 2 | 82 | ---
language:
- sv
multilinguality:
- monolingual
pretty_name: SuperLim
task_categories:
- question-answering
- text-classification
- sequence-modeling
- other
---
# Dataset Card for SuperLim
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Structure/Creation/Use/Additional Information](#dataset-structurecreationuseadditional-information)
- [Dalaj](#dalaj)
- [SweAna](#sweana)
- [SweDiag](#swediag)
- [SweFaq](#swefaq)
- [SweFracas](#swefracas)
- [SwePar](#swepar)
- [SweSat](#swesat)
- [SweSim](#swesim)
- [SweWgr](#swewgr)
- [SweWic](#swewic)
- [SweWsc](#swewsc)
## Dataset Description
- **Homepage:** [Språkbanken](https://spraakbanken.gu.se/en/resources/superlim)
- **Repository:** /
- **Paper:** /
- **Leaderboard:** /
- **Point of Contact:** [Contact Us](mailto:severine.verlinden@ai.se)
### Dataset Summary
A standardized suite for evaluation and analysis of Swedish natural language understanding systems.
### Supported Tasks and Leaderboards
Work in progress
### Languages
Swedish
## Dataset Structure/Creation/Use/Additional Information
### Dalaj
[dataset documentation](https://svn.spraakdata.gu.se/sb-arkiv/pub/dalaj/dalaj_documentation.tsv)
### SweAna
[dataset documentation](https://svn.spraakdata.gu.se/sb-arkiv/pub/swedish_analogy/analogy_documentation_sheet.tsv)
#### SweDiag
work in progress
### SweFaq
[dataset documentation](https://svn.spraakdata.gu.se/sb-arkiv/pub/faq/faq_documentation_sheet.tsv)
### SweFracas
[dataset documentation](https://svn.spraakdata.gu.se/sb-arkiv/pub/swefracas/swefracas_documentation_sheet.tsv)
### SwePar
[dataset documentation](https://svn.spraakdata.gu.se/sb-arkiv/pub/sweparaphrase/sweparaphrase_documentation.tsv)
### SweSat
[dataset documentation](https://svn.spraakdata.gu.se/sb-arkiv/pub/swesat/swesat-synonyms_documentation_sheet.tsv)
### SweSim
[dataset documentation](https://demo.spraakbanken.gu.se/gerlof/SuperSim/supersim-superlim_documentation_sheet.txt)
### SweWgr
[dataset documentation](https://demo.spraakbanken.gu.se/gerlof/SweWinogender/swewinogender_documentation_sheet.txt)
### SweWic
[dataset documentation](https://demo.spraakbanken.gu.se/gerlof/SweWiC/swewic_documentation_sheet.txt)
### SweWsc
[dataset documentation](https://demo.spraakbanken.gu.se/gerlof/SweWinograd/swewinograd_documentation_sheet.txt)
|
tau/mrqa | 2022-03-21T19:26:55.000Z | [
"region:us"
] | tau | The MRQA 2019 Shared Task focuses on generalization in question answering.
An effective question answering system should do more than merely
interpolate from the training set to answer test examples drawn
from the same distribution: it should also be able to extrapolate
to out-of-distribution examples — a significantly harder challenge.
The dataset is a collection of 18 existing QA dataset (carefully selected
subset of them) and converted to the same format (SQuAD format). Among
these 18 datasets, six datasets were made available for training,
six datasets were made available for development, and the final six
for testing. The dataset is released as part of the MRQA 2019 Shared Task. | @inproceedings{fisch2019mrqa,
title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension},
author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen},
booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP},
year={2019},
} | null | 0 | 82 | Entry not found |
nlpaueb/multi_eurlex | 2022-10-25T10:29:13.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|multi_eurlex",
"language:en",
"language:de",
"language:fr",
"language:el",
"language:sk",
"license:cc-by-sa-4.0",
"region:us"
] | nlpaueb | An non-parallel version of the MultiEURLEX datasets released by Chalkidis et al. (2021).
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource).
Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU.
As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels);
this is multi-label classification task (given the text, predict multiple labels).
In this version, MultiEURLEX comprises non-parallel documents across 5 languages (English, German, French, Greek,
and Slovakian) including translations from English to the rest of the 4 available languages. | @InProceedings{xenouleas-etal-2022-realistic-multieurlex,
author = {Xenouleas, Stratos
and Tsoukara, Alexia
and Panagiotakis, Giannis
and Chalkidis, Ilias
and Androutsopoulos, Ion},
title = {Realistic Zero-Shot Cross-Lingual Transfer in Legal Topic Classification},
booktitle = {Proceedings of 12th Hellenic Conference on Artificial Intelligence (SETN 2022)},
year = {2022},
publisher = {Association for Computer Machinery},
location = {Corfu, Greece},
} | null | 4 | 82 | ---
pretty_name: Non-Parallel MultiEURLEX (incl. Translations)
annotations_creators:
- found
language_creators:
- found
- machine-generated
language:
- en
- de
- fr
- el
- sk
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|multi_eurlex
task_categories:
- text-classification
task_ids:
- multi-label-classification
- topic-classification
---
# Dataset Card for "Non-Parallel MultiEURLEX (incl. Translations)"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/nlpaueb/multi-eurlex/tree/realistic-zero-shot
- **Repository:** https://github.com/nlpaueb/multi-eurlex/tree/realistic-zero-shot
- **Paper:** TBA
- **Leaderboard:** N/A
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
**Documents**
MultiEURLEX of Chalkidis et al. (2021) comprises 65k EU laws in 23 official EU languages. Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. Each EUROVOC label ID is associated with a *label descriptor*, e.g., [60, agri-foodstuffs], [6006, plant product], [1115, fruit]. The descriptors are also available in the 23 languages. Chalkidis et al. (2019) published a monolingual (English) version of this dataset, called EUR-LEX, comprising 57k EU laws with the originally assigned gold labels.
In this new version, dubbed "Non-Parallel MultiEURLEX (incl. Translations)", MultiEURLEX comprises non-parallel documents across 5 languages (English, German, French, Greek, and Slovak), i.e., 11,000 different documents per language, including also translations from English to the rest of the 4 available languages.
### Supported Tasks and Leaderboards
MultiEURLEX can be used for legal topic classification, a multi-label classification task where legal documents need to be assigned concepts (in our case, from EUROVOC) reflecting their topics. Unlike EUR-LEX, however, MultiEURLEX supports labels from three different granularities (EUROVOC levels). More importantly, apart from monolingual (*one-to-one*) experiments, it can be used to study cross-lingual transfer scenarios, including *one-to-many* (systems trained in one language and used in other languages with no training data), and *many-to-one* or *many-to-many* (systems jointly trained in multiple languages and used in one or more other languages).
The dataset is not yet part of an established benchmark.
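As a minimal sketch of the multi-label setup, the list of EUROVOC label IDs attached to a document can be turned into a multi-hot target vector. Pure Python; `num_classes=50` is an illustrative value — the real number of classes at a given EUROVOC level can be read from `dataset.features["labels"].feature.num_classes`:

```python
def multi_hot(label_ids, num_classes):
    # Build a multi-hot target vector from a list of EUROVOC label IDs
    vector = [0] * num_classes
    for label_id in label_ids:
        vector[label_id] = 1
    return vector

# The labels of the sample document shown in the Data Instances section
targets = multi_hot([1, 13, 47], num_classes=50)
print(sum(targets))  # 3 active labels
```

Libraries such as scikit-learn provide the same transform as `MultiLabelBinarizer`.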
### Languages
The EU has 24 official languages. When new members join the EU, the set of official languages usually expands, unless those languages are already included. MultiEURLEX covers 23 languages from seven language families (Germanic, Romance, Slavic, Uralic, Baltic, Semitic, Hellenic). EU laws are published in all official languages, except Irish, for resource-related reasons (Read more at https://europa.eu/european-union/about-eu/eu-languages_en). This wide coverage makes MultiEURLEX a valuable testbed for cross-lingual transfer. All languages use the Latin script, except for Bulgarian (Cyrillic script) and Greek. Several other languages are also spoken in EU countries. The EU is home to over 60 additional indigenous regional or minority languages, e.g., Basque, Catalan, Frisian, Saami, and Yiddish, among others, spoken by approx. 40 million people, but these additional languages are not considered official (in EU terms), and EU laws are not translated to them.
This version of MultiEURLEX covers 5 EU languages (English, German, French, Greek, and Slovak). It also includes machine-translated versions of the documents using the EasyNMT framework (https://github.com/UKPLab/EasyNMT) utilizing the many-to-many M2M_100_418M model of Fan et al. (2020) for el-to-en and el-to-de pairs and the OPUS-MT (Tiedemann et al., 2020) models for the rest.
## Dataset Structure
### Data Instances
**Multilingual use of the dataset**
When the dataset is used in a multilingual setting, select the 'all_languages' flag:
```python
from datasets import load_dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'all_languages')
```
```json
{
"celex_id": "31979D0509",
"text": {"en": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close 
cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
"en2fr": "DU CONSEIL du 24 mai 1979 concernant l'aide financiere de la Communaute e l'eradication de la peste porcine africaine en Espagne (79/509/CEE)\nLE CONSEIL DES COMMUNAUTAS EUROPENNES ...",
"en2de": "...",
"en2el": "...",
"en2sk": "..."
},
"labels": [
1,
13,
47
]
}
```
**Monolingual use of the dataset**
When the dataset is used in a monolingual setting, select the ISO language code for one of the 5 supported languages, or a supported translation pair in the form src2trg, where src and trg are ISO language codes (e.g., en2fr for English translated to French). For example:
```python
from datasets import load_dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'en2fr')
```
```json
{
"celex_id": "31979D0509",
"text": "DU CONSEIL du 24 mai 1979 concernant l'aide financiere de la Communaute e l'eradication de la peste porcine africaine en Espagne (79/509/CEE)\nLE CONSEIL DES COMMUNAUTAS EUROPENNES ...",
"labels": [
1,
13,
47
]
}
```
### Data Fields
**Multilingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (dict[**str**]) A dictionary with the available language and translation keys (`en`, `en2fr`, `en2de`, `en2el`, `en2sk`) and the full content of each document as values.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
**Monolingual use of the dataset**
The following data fields are provided for documents (`train`, `dev`, `test`):
`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`text`: (**str**) The full content of each document in the selected language or translation pair.\
`labels`: (**List[int]**) The relevant EUROVOC concepts (labels).
If you want to use the descriptors of the EUROVOC concepts, similar to [Chalkidis et al. (2020)](https://aclanthology.org/2020.emnlp-main.607/), please download the relevant JSON file [here](https://raw.githubusercontent.com/nlpaueb/multi-eurlex/master/data/eurovoc_descriptors.json).
Then you may load it and use it:
```python
import json
from datasets import load_dataset
# Load the English part of the dataset
dataset = load_dataset('nlpaueb/multi_eurlex', 'en', split='train')
# Load (label_id, descriptor) mapping
with open('./eurovoc_descriptors.json') as jsonl_file:
eurovoc_concepts = json.load(jsonl_file)
# Get feature map info
classlabel = dataset.features["labels"].feature
# Retrieve IDs and descriptors from dataset
for sample in dataset:
print(f'DOCUMENT: {sample["celex_id"]}')
# DOCUMENT: 32006D0213
for label_id in sample['labels']:
print(f'LABEL: id:{label_id}, eurovoc_id: {classlabel.int2str(label_id)}, \
eurovoc_desc:{eurovoc_concepts[classlabel.int2str(label_id)]}')
# LABEL: id: 1, eurovoc_id: '100160', eurovoc_desc: 'industry'
```
### Data Splits
<table>
<tr><td> Language </td> <td> ISO code </td> <td> Member Countries where official </td> <td> EU Speakers [1] </td> <td> Number of Documents [2] </td> </tr>
<tr><td> English </td> <td> <b>en</b> </td> <td> United Kingdom (1973-2020), Ireland (1973), Malta (2004) </td> <td> 13/51% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> German </td> <td> <b>de</b> </td> <td> Germany (1958), Belgium (1958), Luxembourg (1958) </td> <td> 16/32% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> French </td> <td> <b>fr</b> </td> <td> France (1958), Belgium(1958), Luxembourg (1958) </td> <td> 12/26% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> Greek </td> <td> <b>el</b> </td> <td> Greece (1981), Cyprus (2008) </td> <td> 3/4% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
<tr><td> Slovak </td> <td> <b>sk</b> </td> <td> Slovakia (2004) </td> <td> 1/1% </td> <td> 11,000 / 1,000 / 5,000 </td> </tr>
</table>
[1] Native and Total EU speakers percentage (%) \
[2] Training / Development / Test Splits
## Dataset Creation
### Curation Rationale
The original dataset was curated by Chalkidis et al. (2021).\
The new version of the dataset was curated by Xenouleas et al. (2022).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at the EUR-LEX portal (https://eur-lex.europa.eu) in unprocessed formats (HTML, XML, RDF). The documents were downloaded from the EUR-LEX portal in HTML. The relevant EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).
Chalkidis et al. (2021) stripped HTML mark-up to provide the documents in plain text format and inferred the labels for EUROVOC levels 1--3, by backtracking the EUROVOC hierarchy branches, from the originally assigned labels to their ancestors in levels 1--3, respectively.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/). EUROVOC has eight levels of concepts. Each document is assigned one or more concepts (labels). If a document is assigned a concept, the ancestors and descendants of that concept are typically not assigned to the same document. The documents were originally annotated with concepts from levels 3 to 8.
Chalkidis et al. (2021) augmented the annotation with three alternative sets of labels per document, replacing each assigned concept by its ancestor from level 1, 2, or 3, respectively.
Thus, Chalkidis et al. (2021) provide four sets of gold labels per document, one for each of the first three levels of the hierarchy, plus the original sparse label assignment. Levels 4 to 8 cannot be used independently, as many documents have gold concepts from the third level; thus many documents would be mislabeled if we discarded level 3.
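The level inference described above amounts to walking up the EUROVOC hierarchy from each assigned concept to its ancestor at the desired level. A sketch with a hypothetical child-to-parent fragment of the hierarchy (the [60, agri-foodstuffs] → [6006, plant product] → [1115, fruit] branch mentioned earlier):

```python
# Hypothetical child -> parent fragment of the EUROVOC hierarchy
# (fruit -> plant product -> agri-foodstuffs)
PARENT = {"1115": "6006", "6006": "60"}

def ancestor_at_depth(concept, depth):
    # Collect the chain from the concept up to its root, then pick the
    # ancestor at the requested depth (1 = top level of the hierarchy).
    chain = [concept]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    chain.reverse()  # root first
    return chain[min(depth, len(chain)) - 1]

print(ancestor_at_depth("1115", 1))  # 60   -> level-1 concept 'agri-foodstuffs'
print(ancestor_at_depth("1115", 2))  # 6006 -> level-2 concept 'plant product'
```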
#### Who are the annotators?
Publications Office of EU (https://publications.europa.eu/en)
### Personal and Sensitive Information
The dataset contains publicly available EU laws that do not include personal or sensitive information with the exception of trivial information presented by consent, e.g., the names of the current presidents of the European Parliament and European Council, and other administration bodies.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Xenouleas et al. (2022)
### Licensing Information
We provide MultiEURLEX with the same licensing as the original EU data (CC-BY-4.0):
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
*Stratos Xenouleas, Alexia Tsoukara, Giannis Panagiotakis, Ilias Chalkidis, and Ion Androutsopoulos.*
*Realistic Zero-Shot Cross-Lingual Transfer in Legal Topic Classification.*
*Proceedings of 12th Hellenic Conference on Artificial Intelligence (SETN 2022). Corfu, Greece. 2022*
```
@InProceedings{xenouleas-etal-2022-realistic-multieurlex,
author = {Xenouleas, Stratos
and Tsoukara, Alexia
and Panagiotakis, Giannis
and Chalkidis, Ilias
and Androutsopoulos, Ion},
title = {Realistic Zero-Shot Cross-Lingual Transfer in Legal Topic Classification},
booktitle = {Proceedings of 12th Hellenic Conference on Artificial Intelligence (SETN 2022)},
year = {2022},
publisher = {Association for Computer Machinery},
location = {Corfu, Greece},
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset. |
TheGreatRambler/mm2_level | 2022-11-11T08:07:34.000Z | [
"task_categories:other",
"task_categories:object-detection",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_categories:text-generation",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:multilingual",
"license:cc-by-nc-sa-4.0",
"text-mining",
"region:us"
] | TheGreatRambler | null | null | null | 5 | 82 | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 levels
tags:
- text-mining
---
# Mario Maker 2 levels
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 levels dataset consists of 26.6 million levels from Nintendo's online service totaling around 100GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 levels dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'data_id': 3000004,
'name': 'カベキック',
'description': 'カベキックをとにかくするコースです。',
'uploaded': 1561644329,
'created': 1561674240,
'gamestyle': 4,
'theme': 0,
'difficulty': 0,
'tag1': 7,
'tag2': 10,
'game_version': 1,
'world_record': 8049,
'upload_time': 193540,
'upload_attempts': 1,
'num_comments': 60,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'timer': 300,
'autoscroll_speed': 0,
'clears': 1646,
'attempts': 3168,
'clear_rate': 51.957070707070706,
'plays': 1704,
'versus_matches': 80,
'coop_matches': 27,
'likes': 152,
'boos': 118,
'unique_players_and_versus': 1391,
'weekly_likes': 0,
'weekly_plays': 1,
'uploader_pid': '5218390885570355093',
'first_completer_pid': '16824392528839047213',
'record_holder_pid': '5411258160547085075',
'level_data': [some binary data],
'unk2': 0,
'unk3': [some binary data],
'unk9': 3,
'unk10': 4,
'unk11': 1,
'unk12': 1
}
```
Level data is a binary blob describing the actual level and is equivalent to the level format Nintendo uses in-game. It is gzip compressed and needs to be decompressed to be read. To read it you only need to use the provided `level.ksy` kaitai struct file and install the kaitai struct runtime to parse it into an object:
```python
from datasets import load_dataset
from kaitaistruct import KaitaiStream
from io import BytesIO
from level import Level
import zlib
ds = load_dataset("TheGreatRambler/mm2_level", streaming=True, split="train")
level_data = next(iter(ds))["level_data"]
level = Level(KaitaiStream(BytesIO(zlib.decompress(level_data))))
# NOTE level.overworld.objects is a fixed size (limitation of Kaitai struct)
# must iterate by object_count or null objects will be included
for i in range(level.overworld.object_count):
obj = level.overworld.objects[i]
print("X: %d Y: %d ID: %s" % (obj.x, obj.y, obj.id))
#OUTPUT:
X: 1200 Y: 400 ID: ObjId.block
X: 1360 Y: 400 ID: ObjId.block
X: 1360 Y: 240 ID: ObjId.block
X: 1520 Y: 240 ID: ObjId.block
X: 1680 Y: 240 ID: ObjId.block
X: 1680 Y: 400 ID: ObjId.block
X: 1840 Y: 400 ID: ObjId.block
X: 2000 Y: 400 ID: ObjId.block
X: 2160 Y: 400 ID: ObjId.block
X: 2320 Y: 400 ID: ObjId.block
X: 2480 Y: 560 ID: ObjId.block
X: 2480 Y: 720 ID: ObjId.block
X: 2480 Y: 880 ID: ObjId.block
X: 2160 Y: 880 ID: ObjId.block
```
Rendering the level data into an image can be done using [Toost](https://github.com/TheGreatRambler/toost) if desired.
You can also download the full dataset. Note that this will download ~100GB:
```python
ds = load_dataset("TheGreatRambler/mm2_level", split="train")
```
## Data Structure
### Data Instances
```python
{
'data_id': 3000004,
'name': 'カベキック',
'description': 'カベキックをとにかくするコースです。',
'uploaded': 1561644329,
'created': 1561674240,
'gamestyle': 4,
'theme': 0,
'difficulty': 0,
'tag1': 7,
'tag2': 10,
'game_version': 1,
'world_record': 8049,
'upload_time': 193540,
'upload_attempts': 1,
'num_comments': 60,
'clear_condition': 0,
'clear_condition_magnitude': 0,
'timer': 300,
'autoscroll_speed': 0,
'clears': 1646,
'attempts': 3168,
'clear_rate': 51.957070707070706,
'plays': 1704,
'versus_matches': 80,
'coop_matches': 27,
'likes': 152,
'boos': 118,
'unique_players_and_versus': 1391,
'weekly_likes': 0,
'weekly_plays': 1,
'uploader_pid': '5218390885570355093',
'first_completer_pid': '16824392528839047213',
'record_holder_pid': '5411258160547085075',
'level_data': [some binary data],
'unk2': 0,
'unk3': [some binary data],
'unk9': 3,
'unk10': 4,
'unk11': 1,
'unk12': 1
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|data_id|int|Data IDs are unique identifiers, gaps in the table are due to levels deleted by users or Nintendo|
|name|string|Course name|
|description|string|Course description|
|uploaded|int|UTC timestamp for when the level was uploaded|
|created|int|Local timestamp for when the level was created|
|gamestyle|int|Gamestyle, enum below|
|theme|int|Theme, enum below|
|difficulty|int|Difficulty, enum below|
|tag1|int|The first tag, if it exists, enum below|
|tag2|int|The second tag, if it exists, enum below|
|game_version|int|The version of the game this level was made on|
|world_record|int|The world record in milliseconds|
|upload_time|int|The upload time in milliseconds|
|upload_attempts|int|The number of attempts it took the uploader to upload|
|num_comments|int|Number of comments, may not reflect the archived comments if there were more than 1000 comments|
|clear_condition|int|Clear condition, enum below|
|clear_condition_magnitude|int|If applicable, the magnitude of the clear condition|
|timer|int|The timer of the level|
|autoscroll_speed|int|A unit of how fast the configured autoscroll speed is for the level|
|clears|int|Course clears|
|attempts|int|Course attempts|
|clear_rate|float|Course clear rate as a percentage (float between 0 and 100)|
|plays|int|Course plays, or "footprints"|
|versus_matches|int|Course versus matches|
|coop_matches|int|Course coop matches|
|likes|int|Course likes|
|boos|int|Course boos|
|unique_players_and_versus|int|All unique players that have ever played this level, including the number of versus matches|
|weekly_likes|int|The weekly likes on this course|
|weekly_plays|int|The weekly plays on this course|
|uploader_pid|string|The player ID of the uploader|
|first_completer_pid|string|The player ID of the user who first cleared this course|
|record_holder_pid|string|The player ID of the user who held the world record at time of archival |
|level_data|bytes|The GZIP compressed decrypted level data, kaitai struct file is provided for reading|
|unk2|int|Unknown|
|unk3|bytes|Unknown|
|unk9|int|Unknown|
|unk10|int|Unknown|
|unk11|int|Unknown|
|unk12|int|Unknown|
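Several of the statistics fields are interrelated; in the sample record above, `clear_rate` equals `clears / attempts` expressed as a percentage. A quick sanity check on the sample values (assuming the relationship holds for other records as well):

```python
# Sample values from the record shown earlier in this card
clears, attempts = 1646, 3168

# clear_rate is stored as a percentage, not a 0-1 fraction
clear_rate = clears / attempts * 100
print(f"{clear_rate:.4f}%")  # 51.9571%
```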
### Data Splits
The dataset only contains a train split.
## Enums
The dataset contains some enum integer fields. This can be used to convert back to their string equivalents:
```python
GameStyles = {
0: "SMB1",
1: "SMB3",
2: "SMW",
3: "NSMBU",
4: "SM3DW"
}
Difficulties = {
0: "Easy",
1: "Normal",
2: "Expert",
3: "Super expert"
}
CourseThemes = {
0: "Overworld",
1: "Underground",
2: "Castle",
3: "Airship",
4: "Underwater",
5: "Ghost house",
6: "Snow",
7: "Desert",
8: "Sky",
9: "Forest"
}
TagNames = {
0: "None",
1: "Standard",
2: "Puzzle solving",
3: "Speedrun",
4: "Autoscroll",
5: "Auto mario",
6: "Short and sweet",
7: "Multiplayer versus",
8: "Themed",
9: "Music",
10: "Art",
11: "Technical",
12: "Shooter",
13: "Boss battle",
14: "Single player",
15: "Link"
}
ClearConditions = {
137525990: "Reach the goal without landing after leaving the ground.",
199585683: "Reach the goal after defeating at least/all (n) Mechakoopa(s).",
272349836: "Reach the goal after defeating at least/all (n) Cheep Cheep(s).",
375673178: "Reach the goal without taking damage.",
426197923: "Reach the goal as Boomerang Mario.",
436833616: "Reach the goal while wearing a Shoe.",
713979835: "Reach the goal as Fire Mario.",
744927294: "Reach the goal as Frog Mario.",
751004331: "Reach the goal after defeating at least/all (n) Larry(s).",
900050759: "Reach the goal as Raccoon Mario.",
947659466: "Reach the goal after defeating at least/all (n) Blooper(s).",
976173462: "Reach the goal as Propeller Mario.",
994686866: "Reach the goal while wearing a Propeller Box.",
998904081: "Reach the goal after defeating at least/all (n) Spike(s).",
1008094897: "Reach the goal after defeating at least/all (n) Boom Boom(s).",
1051433633: "Reach the goal while holding a Koopa Shell.",
1061233896: "Reach the goal after defeating at least/all (n) Porcupuffer(s).",
1062253843: "Reach the goal after defeating at least/all (n) Charvaargh(s).",
1079889509: "Reach the goal after defeating at least/all (n) Bullet Bill(s).",
1080535886: "Reach the goal after defeating at least/all (n) Bully/Bullies.",
1151250770: "Reach the goal while wearing a Goomba Mask.",
1182464856: "Reach the goal after defeating at least/all (n) Hop-Chops.",
1219761531: "Reach the goal while holding a Red POW Block. OR Reach the goal after activating at least/all (n) Red POW Block(s).",
1221661152: "Reach the goal after defeating at least/all (n) Bob-omb(s).",
1259427138: "Reach the goal after defeating at least/all (n) Spiny/Spinies.",
1268255615: "Reach the goal after defeating at least/all (n) Bowser(s)/Meowser(s).",
1279580818: "Reach the goal after defeating at least/all (n) Ant Trooper(s).",
1283945123: "Reach the goal on a Lakitu's Cloud.",
1344044032: "Reach the goal after defeating at least/all (n) Boo(s).",
1425973877: "Reach the goal after defeating at least/all (n) Roy(s).",
1429902736: "Reach the goal while holding a Trampoline.",
1431944825: "Reach the goal after defeating at least/all (n) Morton(s).",
1446467058: "Reach the goal after defeating at least/all (n) Fish Bone(s).",
1510495760: "Reach the goal after defeating at least/all (n) Monty Mole(s).",
1656179347: "Reach the goal after picking up at least/all (n) 1-Up Mushroom(s).",
1665820273: "Reach the goal after defeating at least/all (n) Hammer Bro(s.).",
1676924210: "Reach the goal after hitting at least/all (n) P Switch(es). OR Reach the goal while holding a P Switch.",
1715960804: "Reach the goal after activating at least/all (n) POW Block(s). OR Reach the goal while holding a POW Block.",
1724036958: "Reach the goal after defeating at least/all (n) Angry Sun(s).",
1730095541: "Reach the goal after defeating at least/all (n) Pokey(s).",
1780278293: "Reach the goal as Superball Mario.",
1839897151: "Reach the goal after defeating at least/all (n) Pom Pom(s).",
1969299694: "Reach the goal after defeating at least/all (n) Peepa(s).",
2035052211: "Reach the goal after defeating at least/all (n) Lakitu(s).",
2038503215: "Reach the goal after defeating at least/all (n) Lemmy(s).",
2048033177: "Reach the goal after defeating at least/all (n) Lava Bubble(s).",
2076496776: "Reach the goal while wearing a Bullet Bill Mask.",
2089161429: "Reach the goal as Big Mario.",
2111528319: "Reach the goal as Cat Mario.",
2131209407: "Reach the goal after defeating at least/all (n) Goomba(s)/Galoomba(s).",
2139645066: "Reach the goal after defeating at least/all (n) Thwomp(s).",
2259346429: "Reach the goal after defeating at least/all (n) Iggy(s).",
2549654281: "Reach the goal while wearing a Dry Bones Shell.",
2694559007: "Reach the goal after defeating at least/all (n) Sledge Bro(s.).",
2746139466: "Reach the goal after defeating at least/all (n) Rocky Wrench(es).",
2749601092: "Reach the goal after grabbing at least/all (n) 50-Coin(s).",
2855236681: "Reach the goal as Flying Squirrel Mario.",
3036298571: "Reach the goal as Buzzy Mario.",
3074433106: "Reach the goal as Builder Mario.",
3146932243: "Reach the goal as Cape Mario.",
3174413484: "Reach the goal after defeating at least/all (n) Wendy(s).",
3206222275: "Reach the goal while wearing a Cannon Box.",
3314955857: "Reach the goal as Link.",
3342591980: "Reach the goal while you have Super Star invincibility.",
3346433512: "Reach the goal after defeating at least/all (n) Goombrat(s)/Goombud(s).",
3348058176: "Reach the goal after grabbing at least/all (n) 10-Coin(s).",
3353006607: "Reach the goal after defeating at least/all (n) Buzzy Beetle(s).",
3392229961: "Reach the goal after defeating at least/all (n) Bowser Jr.(s).",
3437308486: "Reach the goal after defeating at least/all (n) Koopa Troopa(s).",
3459144213: "Reach the goal after defeating at least/all (n) Chain Chomp(s).",
3466227835: "Reach the goal after defeating at least/all (n) Muncher(s).",
3481362698: "Reach the goal after defeating at least/all (n) Wiggler(s).",
3513732174: "Reach the goal as SMB2 Mario.",
3649647177: "Reach the goal in a Koopa Clown Car/Junior Clown Car.",
3725246406: "Reach the goal as Spiny Mario.",
3730243509: "Reach the goal in a Koopa Troopa Car.",
3748075486: "Reach the goal after defeating at least/all (n) Piranha Plant(s)/Jumping Piranha Plant(s).",
3797704544: "Reach the goal after defeating at least/all (n) Dry Bones.",
3824561269: "Reach the goal after defeating at least/all (n) Stingby/Stingbies.",
3833342952: "Reach the goal after defeating at least/all (n) Piranha Creeper(s).",
3842179831: "Reach the goal after defeating at least/all (n) Fire Piranha Plant(s).",
3874680510: "Reach the goal after breaking at least/all (n) Crates(s).",
3974581191: "Reach the goal after defeating at least/all (n) Ludwig(s).",
3977257962: "Reach the goal as Super Mario.",
4042480826: "Reach the goal after defeating at least/all (n) Skipsqueak(s).",
4116396131: "Reach the goal after grabbing at least/all (n) Coin(s).",
4117878280: "Reach the goal after defeating at least/all (n) Magikoopa(s).",
4122555074: "Reach the goal after grabbing at least/all (n) 30-Coin(s).",
4153835197: "Reach the goal as Balloon Mario.",
4172105156: "Reach the goal while wearing a Red POW Box.",
    4209535561: "Reach the goal while riding Yoshi.",
4269094462: "Reach the goal after defeating at least/all (n) Spike Top(s).",
4293354249: "Reach the goal after defeating at least/all (n) Banzai Bill(s)."
}
```
<!-- TODO create detailed statistics -->
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.
## Considerations for Using the Data
The dataset consists of levels from many different Mario Maker 2 players globally and as such their titles and descriptions could contain harmful language. Harmful depictions could also be present in the level data, should you choose to render it.
|
ciempiess/ciempiess_test | 2023-08-11T19:19:33.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"ciempiess",
"spanish",
"mexican spanish",
"test set",
"ciempiess project",
"ciempiess-unam project",
"ciempiess test",
"region:us"
] | ciempiess | The CIEMPIESS TEST Corpus is a gender-balanced corpus intended for testing acoustic models on the speech recognition task. The corpus was manually transcribed and contains audio recordings from 10 male and 10 female speakers. The CIEMPIESS TEST is one of the three corpora included in the LDC's \"CIEMPIESS Experimentation\" release (LDC2019S07). | @misc{carlosmenaciempiesstest2022,
title={CIEMPIESS TEST CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.},
ldc_catalog_no={LDC2019S07},
DOI={https://doi.org/10.35111/xdx5-n815},
author={Hernandez Mena, Carlos Daniel},
journal={Linguistic Data Consortium, Philadelphia},
year={2019},
url={https://catalog.ldc.upenn.edu/LDC2019S07},
} | null | 0 | 82 | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- other
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'CIEMPIESS TEST CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- ciempiess
- spanish
- mexican spanish
- test set
- ciempiess project
- ciempiess-unam project
- ciempiess test
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for ciempiess_test
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIEMPIESS-UNAM Project](https://ciempiess.org/)
- **Repository:** [CIEMPIESS-TEST is part of LDC2019S07](https://catalog.ldc.upenn.edu/LDC2019S07)
- **Paper:** [Creating Mexican Spanish Language Resources through the Social Service Program](https://aclanthology.org/2022.nidcp-1.4.pdf)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org)
### Dataset Summary
When developing automatic speech recognition engines, or any other machine learning system, it is good practice to keep the test data separate from the training data and never combine them. The CIEMPIESS TEST Corpus was created out of this need for a standard test set to measure the progress of the community of users of the CIEMPIESS datasets, and we strongly recommend not using the CIEMPIESS TEST for any other purpose.
The CIEMPIESS TEST Corpus is a gender balanced corpus designed to test acoustic models for the speech recognition task. It consists of recordings and human transcripts of 10 male and 10 female speakers.
The CIEMPIESS TEST Corpus is considered a CIEMPIESS dataset because it only contains audio from the same source as the first [CIEMPIESS Corpus](https://catalog.ldc.upenn.edu/LDC2015S07), and it has the word "TEST" in its name because it is recommended for test purposes only.
This corpus is part of the [CIEMPIESS Experimentation](https://catalog.ldc.upenn.edu/LDC2019S07), which is a set of three different datasets, specifically [CIEMPIESS COMPLEMENTARY](https://huggingface.co/datasets/ciempiess/ciempiess_complementary), [CIEMPIESS FEM](https://huggingface.co/datasets/ciempiess/ciempiess_fem) and [CIEMPIESS TEST](https://huggingface.co/datasets/ciempiess/ciempiess_test).
CIEMPIESS is the acronym for:
"Corpus de Investigación en Español de México del Posgrado de Ingeniería Eléctrica y Servicio Social".
### Example Usage
The CIEMPIESS TEST contains only the test split:
```python
from datasets import load_dataset
ciempiess_test = load_dataset("ciempiess/ciempiess_test")
```
It is also valid to do:
```python
from datasets import load_dataset
ciempiess_test = load_dataset("ciempiess/ciempiess_test",split="test")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
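The WER mentioned above is the word-level edit distance between a reference transcription and a model's hypothesis, divided by the number of reference words. A minimal, dependency-free sketch (the example sentence pair is hypothetical, not drawn from the corpus):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("está" -> "esta") out of four reference words.
print(wer("pues está la libertad", "pues esta la libertad"))  # 0.25
```

In practice one would typically use an established implementation (for example the `jiwer` package or the `evaluate` library) and normalize casing and punctuation consistently before scoring.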
### Languages
The language of the corpus is Spanish with the accent of Central Mexico, except for speaker M_09, who comes from El Salvador.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'CMPT_M_07_0074',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/86a30fdc762ba3fad1e38fbe6900ea4940d6f0070af8d56aa483701faa050d51/test/male/M_07/CMPT_M_07_0074.flac',
'array': array([-0.00192261, -0.00234985, -0.00158691, ..., -0.00839233,
-0.00900269, -0.00698853], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'M_07',
'gender': 'male',
'duration': 7.510000228881836,
'normalized_text': 'pues está la libertá de las posiciones de a ver quién es pasivo quién es activo blablablá muchas cosas no pero'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription
### Data Splits
The corpus contains only the test split, which has a total of 3558 speech files from 10 male speakers and 10 female speakers, with a total duration of 8 hours and 8 minutes.
## Dataset Creation
### Curation Rationale
The CIEMPIESS TEST (CT) Corpus has the following characteristics:
* The CT has a total of 3558 audio files of 10 male speakers and 10 female speakers. It has a total duration of 8 hours and 8 minutes.
* The total number of audio files that come from male speakers is 1694, with a total duration of 4 hours and 3 minutes. The total number of audio files that come from female speakers is 1864, with a total duration of 4 hours and 4 minutes. So the CT is balanced in gender.
* All of the speakers in the CT come from Mexico, except for speaker M_09, who comes from El Salvador.
* Every audio file in the CT has a duration of approximately 5 to 10 seconds.
* Data in the CT is classified by gender and also by speaker, so one can easily select audio from a particular set of speakers for experiments.
* Audio files in the CT and the first [CIEMPIESS](https://catalog.ldc.upenn.edu/LDC2015S07) are all of the same type. In both, speakers talk about legal and lawyer issues. They also talk about things related to the [UNAM University](https://www.unam.mx/) and the ["Facultad de Derecho de la UNAM"](https://www.derecho.unam.mx/).
* As in the first CIEMPIESS Corpus, transcriptions in the CT were made by humans.
* Speakers in the CT are not present in any other CIEMPIESS dataset.
* Audio files in the CT are distributed in a 16khz@16bit mono format.
### Source Data
#### Initial Data Collection and Normalization
The CIEMPIESS TEST is a radio corpus designed to test acoustic models for automatic speech recognition, and it is made up of recordings of spontaneous conversations in Spanish between a radio moderator and his guests. Most of the speech in these conversations has the accent of Central Mexico.
All the recordings that constitute the CIEMPIESS TEST come from ["RADIO-IUS"](http://www.derecho.unam.mx/cultura-juridica/radio.php), a radio station belonging to UNAM. Recordings were donated by Lic. Cesar Gabriel Alanis Merchand and Mtro. Ricardo Rojas Arevalo from the "Facultad de Derecho de la UNAM" with the condition that they have to be used for academic and research purposes only.
### Annotations
#### Annotation process
The annotation process is as follows:
* 1. A whole podcast is manually segmented, keeping just the portions that contain good-quality speech.
* 2. A second pass of segmentation is performed, this time to separate the speakers and put them in different folders.
* 3. The resulting speech files, between 5 and 10 seconds long, are transcribed by students from different departments (computing, engineering, linguistics). Most of them are native speakers but have no particular training as transcribers.
#### Who are the annotators?
The CIEMPIESS TEST Corpus was created by the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) of the ["Facultad de Ingeniería"](https://www.ingenieria.unam.mx/) (FI) in the ["Universidad Nacional Autónoma de México"](https://www.unam.mx/) (UNAM) between 2016 and 2018 by Carlos Daniel Hernández Mena, head of the program.
### Personal and Sensitive Information
The dataset could contain names revealing the identity of some speakers; on the other hand, the recordings come from publicly available podcasts, so there was no real intent for the participants to be anonymized. In any case, you agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is challenging because it contains spontaneous speech, so it will be helpful for the ASR community to evaluate their acoustic models in Spanish.
### Discussion of Biases
The dataset intends to be gender balanced: it comprises 10 male speakers and 10 female speakers. On the other hand, the vocabulary is limited to legal issues.
### Other Known Limitations
The transcriptions in this dataset were revised by Mónica Alejandra Ruiz López during 2022, and they are slightly different from the transcriptions found at [LDC](https://catalog.ldc.upenn.edu/LDC2019S07) or at the [CIEMPIESS-UNAM Project](http://www.ciempiess.org/) official website. We strongly recommend using these updated transcriptions; we will soon update the transcriptions in the rest of the repositories.
### Dataset Curators
The dataset was collected by students belonging to the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html), it was curated by Carlos Daniel Hernández Mena and its transcriptions were manually verified by Mónica Alejandra Ruiz López during 2022.
### Licensing Information
[CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{carlosmenaciempiesstest2019,
title={CIEMPIESS TEST CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.},
ldc_catalog_no={LDC2019S07},
DOI={https://doi.org/10.35111/xdx5-n815},
author={Hernandez Mena, Carlos Daniel},
journal={Linguistic Data Consortium, Philadelphia},
year={2019},
url={https://catalog.ldc.upenn.edu/LDC2019S07},
}
```
### Contributions
The authors want to thank Alejandro V. Mena, Elena Vera and Angélica Gutiérrez for their support of the social service program "Desarrollo de Tecnologías del Habla." We also thank the social service students for all their hard work.
We also thank Lic. Cesar Gabriel Alanis Merchand and Mtro. Ricardo Rojas Arevalo from the "Facultad de Derecho de la UNAM" for donating all the recordings that constitute the CIEMPIESS TEST Corpus.
Special thanks to Mónica Alejandra Ruiz López, who performed a meticulous verification of the transcriptions of this dataset during 2022.
|
lewtun/titanic | 2022-12-15T17:59:50.000Z | [
"kaggle",
"region:us"
] | lewtun | null | null | null | 0 | 82 | ---
tags:
- kaggle
dataset_info:
features:
- name: PassengerId
dtype: int64
- name: Survived
dtype: int64
- name: Pclass
dtype: int64
- name: Name
dtype: string
- name: Sex
dtype: string
- name: Age
dtype: float64
- name: SibSp
dtype: int64
- name: Parch
dtype: int64
- name: Ticket
dtype: string
- name: Fare
dtype: float64
- name: Cabin
dtype: string
- name: Embarked
dtype: string
splits:
- name: test
num_bytes: 49054
num_examples: 418
- name: train
num_bytes: 103906
num_examples: 891
download_size: 61019
dataset_size: 152960
---
# Dataset Card for "titanic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
metaeval/strategy-qa | 2023-02-09T14:56:34.000Z | [
"region:us"
] | metaeval | null | null | null | 2 | 82 | Entry not found |
edarchimbaud/earnings-forecast-stocks | 2023-10-07T23:14:13.000Z | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | edarchimbaud | null | null | null | 2 | 82 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: id
dtype: int64
- name: fiscal_end
dtype: string
- name: consensus_eps_forecast
dtype: float64
- name: high_eps_forecast
dtype: float64
- name: low_eps_forecast
dtype: float64
- name: no_of_estimates
dtype: int64
- name: up
dtype: int64
- name: down
dtype: int64
splits:
- name: train
num_bytes: 8432712
num_examples: 94561
download_size: 763273
dataset_size: 8432712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "earnings-forecast-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The earnings-forecast-sp500 dataset provides information about the earnings forecast for the S&P 500 index constituents. The dataset includes features that detail each company's fiscal end, the consensus earnings per share (EPS) forecast, the high and low EPS forecasts, the number of estimates, and the number of upward and downward revisions.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string indicating the date of the forecast.
- id (int64): An integer representing the unique identifier for the forecast.
- fiscal_end (string): A string indicating the fiscal end date for the forecast.
- consensus_eps_forecast (float64): A floating-point number representing the consensus earnings per share forecast.
- high_eps_forecast (float64): A floating-point number representing the highest earnings per share forecast.
- low_eps_forecast (float64): A floating-point number representing the lowest earnings per share forecast.
- no_of_estimates (int64): An integer representing the number of estimates contributing to the consensus forecast.
- up (int64): An integer representing the number of upward revisions to the forecast.
- down (int64): An integer representing the number of downward revisions to the forecast.
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The earnings-forecast-sp500 dataset was developed to support the development of high-frequency trading algorithms and investment strategies that rely on earnings forecasts.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from financial data providers and normalized for consistency.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The earnings-forecast-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The earnings-forecast-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, earnings-forecast-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
edarchimbaud/short-interest-stocks | 2023-10-07T23:16:33.000Z | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | edarchimbaud | null | null | null | 1 | 82 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: id
dtype: int64
- name: settlement_date
dtype: timestamp[ns]
- name: interest
dtype: float64
- name: avg_daily_share_volume
dtype: float64
- name: days_to_cover
dtype: float64
splits:
- name: train
num_bytes: 8887027
num_examples: 143370
download_size: 1018696
dataset_size: 8887027
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "short-interest-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The short-interest-sp500 dataset provides short interest data for companies listed on the S&P 500 index. This includes the number of shares that have been sold short but have not yet been covered or closed out.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string representing the date when the data was collected.
- id (int64): A unique integer identifier for each data instance.
- settlement_date (timestamp[ns]): The date by which a buyer must pay for the securities delivered by the seller.
- interest (float64): A floating point number representing the short interest of the company on the specified date.
- avg_daily_share_volume (float64): A floating point number representing the average daily trading volume of the company.
- days_to_cover (float64): A floating point number representing the days to cover metric, which is the number of days volume worth of short interest.
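The `days_to_cover` field described above can be derived from the other two numeric fields: short interest divided by average daily share volume. A small illustration with hypothetical figures (not taken from the dataset):

```python
# Hypothetical figures: 5,000,000 shares sold short against an average
# daily trading volume of 2,000,000 shares.
interest = 5_000_000.0
avg_daily_share_volume = 2_000_000.0

# Days to cover: how many days of average volume it would take for all
# short sellers to buy back (cover) their positions.
days_to_cover = interest / avg_daily_share_volume
print(days_to_cover)  # 2.5
```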
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The short-interest-sp500 dataset was created to facilitate the study of market dynamics, particularly the role of short selling.
### Source Data
#### Initial Data Collection and Normalization
The dataset was compiled from publicly available sources.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The short-interest-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The short-interest-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, short-interest-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
edarchimbaud/timeseries-1m-stocks | 2023-10-10T10:03:14.000Z | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | edarchimbaud | null | null | null | 1 | 82 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: datetime
dtype: timestamp[ns]
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
splits:
- name: train
num_bytes: 183638759
num_examples: 3285481
download_size: 83653543
dataset_size: 183638759
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "timeseries-1mn-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The "timeseries-1mn-sp500" dataset provides one-minute time-series data for the S&P 500 index constituents.
### Supported Tasks and Leaderboards
This dataset is suitable for tasks such as time-series forecasting, volatility prediction, and high-frequency trading strategy development.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): The ticker symbol or abbreviation used to identify the company.
- datetime (timestamp): The date and time of the stock quote, in nanoseconds.
- open (float64): The opening price of the stock at the given datetime.
- high (float64): The highest price of the stock during the given minute.
- low (float64): The lowest price of the stock during the given minute.
- close (float64): The closing price of the stock at the given datetime.
- volume (float64): The volume of the stock traded during the given minute.
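One common use of one-minute OHLCV bars like these is aggregating them to a coarser granularity. The sketch below uses hypothetical, made-up bars (not values from the dataset) and the standard aggregation rules: first open, max high, min low, last close, summed volume.

```python
import pandas as pd

# Hypothetical one-minute bars for a single symbol (synthetic values).
bars = pd.DataFrame(
    {
        "datetime": pd.to_datetime(
            ["2023-01-02 09:30", "2023-01-02 09:31",
             "2023-01-02 09:32", "2023-01-02 09:33"]
        ),
        "open":   [100.0, 101.0, 102.0, 101.5],
        "high":   [101.5, 102.0, 102.5, 102.0],
        "low":    [ 99.5, 100.5, 101.0, 101.0],
        "close":  [101.0, 102.0, 101.5, 101.8],
        "volume": [1000.0, 1200.0, 900.0, 1100.0],
    }
).set_index("datetime")

# Aggregate 1-minute bars into 2-minute bars.
two_min = bars.resample("2min").agg(
    {"open": "first", "high": "max", "low": "min",
     "close": "last", "volume": "sum"}
)
print(two_min)
```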
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The "timeseries-1mn-sp500" dataset was created to support high-frequency trading algorithms and time-series forecasting models.
### Source Data
#### Initial Data Collection and Normalization
The data was sourced from the web and normalized.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The timeseries-1mn-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The timeseries-1mn-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, timeseries-1mn-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
clarin-knext/trec-covid-pl-qrels | 2023-06-07T08:11:44.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 82 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
C-MTEB/ATEC | 2023-07-28T13:53:38.000Z | [
"region:us"
] | C-MTEB | null | null | null | 2 | 82 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: int32
splits:
- name: train
num_bytes: 5689779
num_examples: 62477
- name: validation
num_bytes: 1823903
num_examples: 20000
- name: test
num_bytes: 1824603
num_examples: 20000
download_size: 5073390
dataset_size: 9338285
---
# Dataset Card for "ATEC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_instruction_v1 | 2023-09-12T17:03:59.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 82 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1240694304
num_examples: 700792
- name: validation
num_bytes: 159695683
num_examples: 84560
download_size: 89672897
dataset_size: 1400389987
---
# Dataset Card for "squad_instruction_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fiveflow/koquad_v2_polyglot_tkd | 2023-09-15T15:52:16.000Z | [
"region:us"
] | fiveflow | null | null | null | 0 | 82 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 7699047417
num_examples: 50000
download_size: 1305602573
dataset_size: 7699047417
---
# Dataset Card for "koquad_v2_polyglot_tkd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cawoylel/FulaSpeechCorpora | 2023-09-22T16:10:37.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:audio-classification",
"size_categories:100K<n<1M",
"language:ff",
"region:us"
] | cawoylel | null | null | null | 0 | 82 | ---
configs:
- config_name: default
data_files:
- split: pulaar
path: data/pulaar-*
- split: maacina
path: data/maacina-*
- split: liptako
path: data/liptako-*
- split: caka
path: data/caka-*
- split: bororro
path: data/bororro-*
- split: borgu
path: data/borgu-*
- split: pular
path: data/pular-*
- split: adamawa
path: data/adamawa-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: dialect
dtype: string
splits:
- name: pulaar
num_bytes: 3398551955.96
num_examples: 12880
- name: maacina
num_bytes: 2677353337.824
num_examples: 14336
- name: liptako
num_bytes: 5858678478.536
num_examples: 36828
- name: caka
num_bytes: 2790732470.205
num_examples: 14865
- name: bororro
num_bytes: 2952498447.936
num_examples: 15022
- name: borgu
num_bytes: 2849809213.278
num_examples: 13387
- name: pular
num_bytes: 2339299211.055
num_examples: 11779
- name: adamawa
num_bytes: 2225350403.136
num_examples: 13504
download_size: 20035287564
dataset_size: 25092273517.93
task_categories:
- automatic-speech-recognition
- text-to-speech
- audio-classification
language:
- ff
pretty_name: Fula Multidialectal Speech Corpora
size_categories:
- 100K<n<1M
---
# Dataset Card for "FulaSpeechCorporaNew"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
notrichardren/azaria-mitchell-diff-filtered | 2023-10-03T04:33:20.000Z | [
"region:us"
] | notrichardren | null | null | null | 0 | 82 | ---
configs:
- config_name: default
data_files:
- split: cities
path: data/cities-*
- split: companies
path: data/companies-*
- split: animals
path: data/animals-*
- split: elements
path: data/elements-*
- split: inventions
path: data/inventions-*
- split: facts
path: data/facts-*
dataset_info:
features:
- name: claim
dtype: string
- name: label
dtype: int64
- name: dataset
dtype: string
- name: qa_type
dtype: int64
- name: ind
dtype: int64
splits:
- name: cities
num_bytes: 7955
num_examples: 112
- name: companies
num_bytes: 14588
num_examples: 129
- name: animals
num_bytes: 11451
num_examples: 137
- name: elements
num_bytes: 11617
num_examples: 139
- name: inventions
num_bytes: 10559
num_examples: 127
- name: facts
num_bytes: 14809
num_examples: 159
download_size: 44699
dataset_size: 70979
---
# Dataset Card for "azaria-mitchell-diff-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
code_x_glue_cc_code_completion_token | 2023-06-12T08:13:31.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:code",
"license:c-uda",
"region:us"
] | null | Predict next code token given context of previous tokens. Models are evaluated by token level accuracy.
Code completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. Models should have be able to predict the next token in arbitary types. | @article{raychev2016probabilistic,
title={Probabilistic Model for Code with Decision Trees},
author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
journal={ACM SIGPLAN Notices},
pages={731--747},
year={2016},
publisher={ACM New York, NY, USA}
}
@inproceedings{allamanis2013mining,
title={Mining Source Code Repositories at Massive Scale using Language Modeling},
author={Allamanis, Miltiadis and Sutton, Charles},
booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
pages={207--216},
year={2013},
organization={IEEE}
}
@dataset{rafael_michael_karampatsis_2020_3628665,
author = {Rafael - Michael Karampatsis and
Hlib Babii and
Romain Robbes and
Charles Sutton and
Andrea Janes},
title = {Preprocessed Java Code Corpus},
month = jan,
year = 2020,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.3628665},
url = {https://doi.org/10.5281/zenodo.3628665}
} | null | 1 | 81 | ---
annotations_creators:
- found
language_creators:
- found
language:
- code
license:
- c-uda
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: CodeXGlueCcCodeCompletionToken
dataset_info:
- config_name: java
features:
- name: id
dtype: int32
- name: code
sequence: string
splits:
- name: train
num_bytes: 128312061
num_examples: 12934
- name: validation
num_bytes: 30259174
num_examples: 7189
- name: test
num_bytes: 43027956
num_examples: 8268
download_size: 126856519
dataset_size: 201599191
- config_name: python
features:
- name: id
dtype: int32
- name: path
dtype: string
- name: code
sequence: string
splits:
- name: train
num_bytes: 684319575
num_examples: 100000
- name: test
num_bytes: 333978088
num_examples: 50000
download_size: 199067128
dataset_size: 1018297663
---
# Dataset Card for "code_x_glue_cc_code_completion_token"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token
### Dataset Summary
CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token
Predict next code token given context of previous tokens. Models are evaluated by token level accuracy.
Code completion is one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. The token level task is analogous to language modeling. Models should be able to predict the next token of arbitrary types.
### Supported Tasks and Leaderboards
- `language-modeling`: The dataset can be used to train a model for completing single code tokens.
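As a rough sketch of that next-token setup (illustrative only; the token list below is a shortened, made-up sample in the same shape as the dataset's `code` field), every position in a tokenized file yields a (context, next-token) prediction pair, exactly as in language modeling:

```python
# Hypothetical shortened sample in the dataset's token format.
code_tokens = ["<s>", "import", "java", ".", "io", ".", "IOException", ";", "</s>"]

def make_completion_pairs(tokens):
    """Yield (context, next_token) pairs for next-token prediction."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

pairs = make_completion_pairs(code_tokens)
# First prediction: given ["<s>"], the model should output "import".
print(pairs[0])
```

Token-level accuracy is then simply the fraction of these pairs for which the model's predicted token matches the reference token.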
### Languages
- Java **programming** language
- Python **programming** language
## Dataset Structure
### Data Instances
#### java
An example of 'test' looks as follows.
```
{
"code": ["<s>", "package", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "demo", ";", "import", "java", ".", "io", ".", "BufferedReader", ";", "import", "java", ".", "io", ".", "ByteArrayInputStream", ";", "import", "java", ".", "io", ".", "IOException", ";", "import", "java", ".", "io", ".", "InputStreamReader", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "Clara", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "inflater", ".", "LayoutInflaterException", ";", "import", "com", ".", "vaadin", ".", "Application", ";", "import", "com", ".", "vaadin", ".", "terminal", ".", "ThemeResource", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ".", "ClickEvent", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Component", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Embedded", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalSplitPanel", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "TextArea", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "VerticalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ".", "Notification", ";", "@", "SuppressWarnings", "(", "\"serial\"", ")", "public", "class", "DemoApplication", "extends", "Application", "{", "private", "DemoController", "controller", ";", "private", "TextArea", "xmlArea", ";", "private", "HorizontalSplitPanel", "split", "=", "new", "HorizontalSplitPanel", "(", ")", ";", "private", "Window", "mainWindow", ";", "@", "Override", "public", "void", "init", "(", ")", "{", "setTheme", "(", "\"clara\"", ")", ";", "setMainWindow", "(", "mainWindow", "=", "new", "Window", "(", ")", ")", ";", "controller", "=", "new", "DemoController", "(", "mainWindow", ")", ";", "mainWindow", ".", "setContent", "(", "split", 
")", ";", "VerticalLayout", "editor", "=", "new", "VerticalLayout", "(", ")", ";", "editor", ".", "setSpacing", "(", "true", ")", ";", "editor", ".", "setMargin", "(", "false", ",", "false", ",", "false", ",", "true", ")", ";", "editor", ".", "setHeight", "(", "\"100%\"", ")", ";", "editor", ".", "addComponent", "(", "xmlArea", "=", "createXmlArea", "(", ")", ")", ";", "editor", ".", "setExpandRatio", "(", "xmlArea", ",", "1.0f", ")", ";", "editor", ".", "addComponent", "(", "createUpdateButton", "(", ")", ")", ";", "HorizontalLayout", "wrapper", "=", "new", "HorizontalLayout", "(", ")", ";", "wrapper", ".", "setMargin", "(", "true", ")", ";", "wrapper", ".", "setSizeFull", "(", ")", ";", "wrapper", ".", "addComponent", "(", "createLogo", "(", ")", ")", ";", "wrapper", ".", "addComponent", "(", "editor", ")", ";", "wrapper", ".", "setExpandRatio", "(", "editor", ",", "1.0f", ")", ";", "split", ".", "setFirstComponent", "(", "wrapper", ")", ";", "updateLayout", "(", ")", ";", "}", "private", "Component", "createLogo", "(", ")", "{", "Embedded", "logo", "=", "new", "Embedded", "(", "null", ",", "new", "ThemeResource", "(", "\"\"", ")", ")", ";", "logo", ".", "setHeight", "(", "\"90px\"", ")", ";", "logo", ".", "setWidth", "(", "\"90px\"", ")", ";", "return", "logo", ";", "}", "private", "TextArea", "createXmlArea", "(", ")", "{", "TextArea", "area", "=", "new", "TextArea", "(", ")", ";", "area", ".", "setStyleName", "(", "\"xml-area\"", ")", ";", "area", ".", "setSizeFull", "(", ")", ";", "area", ".", "setValue", "(", "readStartingPoint", "(", ")", ")", ";", "return", "area", ";", "}", "private", "Button", "createUpdateButton", "(", ")", "{", "return", "new", "Button", "(", "\"Update\"", ",", "new", "Button", ".", "ClickListener", "(", ")", "{", "public", "void", "buttonClick", "(", "ClickEvent", "event", ")", "{", "updateLayout", "(", ")", ";", "}", "}", ")", ";", "}", "private", "String", "readStartingPoint", "(", ")", "{", "BufferedReader", "reader", "=", "null", 
";", "try", "{", "reader", "=", "new", "BufferedReader", "(", "new", "InputStreamReader", "(", "getClass", "(", ")", ".", "getClassLoader", "(", ")", ".", "getResourceAsStream", "(", "\"\"", ")", ")", ")", ";", "StringBuilder", "xml", "=", "new", "StringBuilder", "(", ")", ";", "String", "line", ";", "while", "(", "(", "line", "=", "reader", ".", "readLine", "(", ")", ")", "!=", "null", ")", "{", "xml", ".", "append", "(", "line", ")", ";", "xml", ".", "append", "(", "\"n\"", ")", ";", "}", "return", "xml", ".", "toString", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "finally", "{", "if", "(", "reader", "!=", "null", ")", "{", "try", "{", "reader", ".", "close", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "}", "}", "return", "null", ";", "}", "private", "void", "updateLayout", "(", ")", "{", "try", "{", "Component", "c", "=", "Clara", ".", "create", "(", "new", "ByteArrayInputStream", "(", "xmlArea", ".", "getValue", "(", ")", ".", "toString", "(", ")", ".", "getBytes", "(", ")", ")", ",", "controller", ")", ";", "split", ".", "replaceComponent", "(", "split", ".", "getSecondComponent", "(", ")", ",", "c", ")", ";", "}", "catch", "(", "LayoutInflaterException", "e", ")", "{", "mainWindow", ".", "showNotification", "(", "e", ".", "getMessage", "(", ")", ",", "Notification", ".", "TYPE_ERROR_MESSAGE", ")", ";", "}", "}", "}", "</s>"],
"id": 0
}
```
#### python
An example of 'train' looks as follows.
```
{
"code": ["<s>", "from", "bootstrap", "import", "Bootstrap", "<EOL>", "from", "fund", "import", "InstantPaymentNotificationHandler", "<EOL>", "from", "fund", "import", "ThankYouHandler", "<EOL>", "from", "view", "import", "*", "<EOL>", "mapping", "=", "[", "(", "<EOL>", "r'/'", ",", "<EOL>", "Index", "<EOL>", ")", ",", "(", "<EOL>", "r'/ipn'", ",", "<EOL>", "InstantPaymentNotificationHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/thank-you'", ",", "<EOL>", "ThankYouHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/about\\/?'", ",", "<EOL>", "About", "<EOL>", ")", ",", "(", "<EOL>", "r'/guide\\/?'", ",", "<EOL>", "Guide", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Download", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Standards", "<EOL>", ")", ",", "(", "<EOL>", "r'/community\\/?'", ",", "<EOL>", "Community", "<EOL>", ")", ",", "(", "<EOL>", "r'/news\\/?'", ",", "<EOL>", "News", "<EOL>", ")", ",", "(", "<EOL>", "r'/support\\/?'", ",", "<EOL>", "Support", "<EOL>", ")", ",", "(", "<EOL>", "r'/contact\\/?'", ",", "<EOL>", "Contact", "<EOL>", ")", ",", "(", "<EOL>", "r'/press\\/?'", ",", "<EOL>", "Press", "<EOL>", ")", ",", "(", "<EOL>", "r'/legal/terms'", ",", "<EOL>", "Terms", "<EOL>", ")", ",", "(", "<EOL>", "r'/library\\/?'", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Users", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "User", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectSuccess", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", 
"RedirectError", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectAfterDelete", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Moderate", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Bootstrap", "<EOL>", ")", ",", "(", "<EOL>", "r'/activity'", ",", "<EOL>", "ActivityScreen", "<EOL>", ")", ",", "(", "<EOL>", "r'/txns'", ",", "<EOL>", "TxnList", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "MessageStrings", "<EOL>", ")", ",", "(", "<EOL>", "r'/.*'", ",", "<EOL>", "NotFound", "<EOL>", ")", "<EOL>", "]", "</s>"],
"id": 0,
"path": "00/wikihouse/urls.py\n"
}
```
### Data Fields
In the following, each data field is explained for each config. The data fields are the same among all splits.
#### java
|field name| type | description |
|----------|----------------|--------------------|
|id |int32 | Index of the sample|
|code |Sequence[string]| Code Tokens |
#### python
|field name| type | description |
|----------|----------------|-----------------------------|
|id |int32 | Index of the sample |
|path |string | Original path in the dataset|
|code |Sequence[string]| Code Tokens |
### Data Splits
#### java
| |train|validation|test|
|----|----:|---------:|---:|
|java|12934| 7189|8268|
#### python
| |train |test |
|------|-----:|----:|
|python|100000|50000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{raychev2016probabilistic,
title={Probabilistic Model for Code with Decision Trees},
author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
journal={ACM SIGPLAN Notices},
pages={731--747},
year={2016},
publisher={ACM New York, NY, USA}
}
@inproceedings{allamanis2013mining,
title={Mining Source Code Repositories at Massive Scale using Language Modeling},
author={Allamanis, Miltiadis and Sutton, Charles},
booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
pages={207--216},
year={2013},
organization={IEEE}
}
```
The data for "java" configuration comes from:
```
@dataset{rafael_michael_karampatsis_2020_3628665,
author = {Rafael - Michael Karampatsis and
Hlib Babii and
Romain Robbes and
Charles Sutton and
Andrea Janes},
title = {Preprocessed Java Code Corpus},
month = jan,
year = 2020,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.3628665},
url = {https://doi.org/10.5281/zenodo.3628665}
}
```
### Contributions
Thanks to @madlag (and partly also @ncoop57) for adding this dataset. |
kilt_wikipedia | 2023-04-05T10:08:59.000Z | [
"region:us"
] | null | KILT-Wikipedia: Wikipedia pre-processed for KILT. | @inproceedings{fb_kilt,
author = {Fabio Petroni and
Aleksandra Piktus and
Angela Fan and
Patrick Lewis and
Majid Yazdani and
Nicola De Cao and
James Thorne and
Yacine Jernite and
Vassilis Plachouras and
Tim Rockt\"aschel and
Sebastian Riedel},
title = {{KILT:} a {B}enchmark for {K}nowledge {I}ntensive {L}anguage {T}asks},
journal = {CoRR},
archivePrefix = {arXiv},
year = {2020}, | null | 10 | 81 | ---
paperswithcode_id: null
pretty_name: KiltWikipedia
dataset_info:
features:
- name: kilt_id
dtype: string
- name: wikipedia_id
dtype: string
- name: wikipedia_title
dtype: string
- name: text
sequence:
- name: paragraph
dtype: string
- name: anchors
sequence:
- name: paragraph_id
dtype: int32
- name: start
dtype: int32
- name: end
dtype: int32
- name: text
dtype: string
- name: href
dtype: string
- name: wikipedia_title
dtype: string
- name: wikipedia_id
dtype: string
- name: categories
dtype: string
- name: wikidata_info
struct:
- name: description
dtype: string
- name: enwikiquote_title
dtype: string
- name: wikidata_id
dtype: string
- name: wikidata_label
dtype: string
- name: wikipedia_title
dtype: string
- name: aliases
sequence:
- name: alias
dtype: string
- name: history
struct:
- name: pageid
dtype: int32
- name: parentid
dtype: int32
- name: revid
dtype: int32
- name: pre_dump
dtype: bool
- name: timestamp
dtype: string
- name: url
dtype: string
config_name: '2019-08-01'
splits:
- name: full
num_bytes: 29372535718
num_examples: 5903530
download_size: 37318876722
dataset_size: 29372535718
---
# Dataset Card for "kilt_wikipedia"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/facebookresearch/KILT](https://github.com/facebookresearch/KILT)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 37.32 GB
- **Size of the generated dataset:** 29.37 GB
- **Total amount of disk used:** 66.69 GB
### Dataset Summary
KILT-Wikipedia: Wikipedia pre-processed for KILT.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### 2019-08-01
- **Size of downloaded dataset files:** 37.32 GB
- **Size of the generated dataset:** 29.37 GB
- **Total amount of disk used:** 66.69 GB
An example of 'full' looks as follows.
```
{
"anchors": {
"end": [],
"href": [],
"paragraph_id": [],
"start": [],
"text": [],
"wikipedia_id": [],
"wikipedia_title": []
},
"categories": "",
"history": {
"pageid": 0,
"parentid": 0,
"pre_dump": true,
"revid": 0,
"timestamp": "",
"url": ""
},
"kilt_id": "",
"text": {
"paragraph": []
},
"wikidata_info": {
"aliases": {
"alias": []
},
"description": "",
"enwikiquote_title": "",
"wikidata_id": "",
"wikidata_label": "",
"wikipedia_title": ""
},
"wikipedia_id": "",
"wikipedia_title": ""
}
```
### Data Fields
The data fields are the same among all splits.
#### 2019-08-01
- `kilt_id`: a `string` feature.
- `wikipedia_id`: a `string` feature.
- `wikipedia_title`: a `string` feature.
- `text`: a dictionary feature containing:
- `paragraph`: a `string` feature.
- `anchors`: a dictionary feature containing:
- `paragraph_id`: a `int32` feature.
- `start`: a `int32` feature.
- `end`: a `int32` feature.
- `text`: a `string` feature.
- `href`: a `string` feature.
- `wikipedia_title`: a `string` feature.
- `wikipedia_id`: a `string` feature.
- `categories`: a `string` feature.
- `description`: a `string` feature.
- `enwikiquote_title`: a `string` feature.
- `wikidata_id`: a `string` feature.
- `wikidata_label`: a `string` feature.
- `wikipedia_title`: a `string` feature.
- `aliases`: a dictionary feature containing:
- `alias`: a `string` feature.
- `pageid`: a `int32` feature.
- `parentid`: a `int32` feature.
- `revid`: a `int32` feature.
- `pre_dump`: a `bool` feature.
- `timestamp`: a `string` feature.
- `url`: a `string` feature.
### Data Splits
| name | full |
|----------|------:|
|2019-08-01|5903530|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{fb_kilt,
author = {Fabio Petroni and
Aleksandra Piktus and
Angela Fan and
Patrick Lewis and
Majid Yazdani and
Nicola De Cao and
James Thorne and
Yacine Jernite and
Vassilis Plachouras and
Tim Rockt\"aschel and
Sebastian Riedel},
title = {{KILT:} a {B}enchmark for {K}nowledge {I}ntensive {L}anguage {T}asks},
journal = {CoRR},
archivePrefix = {arXiv},
year = {2020},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset. |
GEM/schema_guided_dialog | 2022-10-24T15:30:26.000Z | [
"task_categories:conversational",
"annotations_creators:crowd-sourced",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"dialog-response-generation",
"arxiv:1909.05855",
"arxiv:2004.15006",
"arxiv:2002.01359",
"region:us"
] | GEM | The Schema-Guided Dialogue (SGD) dataset contains 18K multi-domain task-oriented
dialogues between a human and a virtual assistant, which covers 17 domains
ranging from banks and events to media, calendar, travel, and weather. The
language presents in the datset is only English. The SGD dataset provides a
challenging testbed for a number of tasks in task-oriented dialogue, including
language understanding, slot filling, dialogue state tracking and response
generation. For the creation of the SGD dataset, they developed a multi-domain
dialogue simulator that generates dialogue outlines over an arbitrary combination
of APIs, dialogue states and system actions. Then, they used a crowd-sourcing
procedure to paraphrase these outlines to natural language utterances. This novel
crowd-sourcing procedure preserves all annotations obtained from the simulator and
does not require any extra annotations after dialogue collection. | @inproceedings{rastogi2020towards,
title={Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset},
author={Rastogi, Abhinav and Zang, Xiaoxue and Sunkara, Srinivas and Gupta, Raghav and Khaitan, Pranav},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={34},
number={05},
pages={8689--8696},
year={2020}
} | null | 3 | 81 | ---
annotations_creators:
- crowd-sourced
language_creators:
- unknown
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conversational
task_ids: []
pretty_name: schema_guided_dialog
tags:
- dialog-response-generation
---
# Dataset Card for GEM/schema_guided_dialog
## Dataset Description
- **Homepage:** n/a
- **Repository:** [Github](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue)
- **Paper:** https://arxiv.org/abs/1909.05855
- **Leaderboard:** N/A
- **Point of Contact:** Abhinav Rastogi
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/schema_guided_dialog).
### Dataset Summary
The GEM version of this dataset functions as a response generation dataset. The input specifies dialog acts that a model needs to verbalize. The Schema-Guided Dialog dataset is challenging since it comprises multiple domains from hotel and travel to restaurants, and a wide range of dialog acts. The context of each conversation is provided as well.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/schema_guided_dialog')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/schema_guided_dialog).
#### website
n/a
#### paper
[Arxiv](https://arxiv.org/abs/1909.05855)
#### authors
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, Pranav Khaitan, Amir Fayazi, Maria Wang, and Guan-Lin Chao
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[Arxiv](https://arxiv.org/abs/1909.05855)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{rastogi2020towards,
title={Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset},
author={Rastogi, Abhinav and Zang, Xiaoxue and Sunkara, Srinivas and Gupta, Raghav and Khaitan, Pranav},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={34},
number={05},
pages={8689--8696},
year={2020}
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Abhinav Rastogi
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
schema-guided-dst@google.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The language structure is machine-generated, and the language realizations are produced by crowd workers.
The dataset paper does not provide demographic information for the crowd workers.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The Schema-Guided Dialogue (SGD) dataset contains 18K multi-domain task-oriented dialogues between a human and a virtual assistant, which covers 17 domains ranging from banks and events to media, calendar, travel, and weather.
The language present in the dataset is only English.
The SGD dataset provides a challenging testbed for a number of tasks in task-oriented dialogue, including language understanding, slot filling, dialogue state tracking and response generation.
For the creation of the SGD dataset, they developed a multi-domain dialogue simulator that generates dialogue outlines over an arbitrary combination of APIs, dialogue states and system actions. Then, they used a crowd-sourcing procedure to paraphrase these outlines to natural language utterances.
This novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Dialog Response Generation
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The goal of a speaker who generates the target utterance is to help users accomplish tasks including but not limited to finding flights, booking restaurants, searching for nearby events and movies.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Google
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, Pranav Khaitan, Amir Fayazi, Maria Wang, and Guan-Lin Chao
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Wanyu Du wrote the initial data card and Yacine Jernite the data loader. Simon Mille updated the data card with the additional splits. Sebastian Gehrmann migrated the data card and loader to the v2 version and extended the missing information.
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
Each dialog instance has the following fields:
* `dialogue_id`: A unique identifier for a dialogue.
* `services`: A list of services present in the dialogue.
* `turns`: A list of annotated system or user utterances. Each turn consists of the following fields:
* `speaker`: The speaker for the turn, either `USER` or `SYSTEM`.
* `utterance`: A string containing the natural language utterance.
* `frames`: A list of frames, each frame containing annotations for a single service and consists of the following fields:
* `service`: The name of the service corresponding to the frame. The slots and intents used in the following fields are taken from the schema of this service.
* `slots`: A list of slot spans in the utterance, only provided for non-categorical slots. Each slot span contains the following fields:
* `slot`: The name of the slot.
* `start`: The index of the starting character in the utterance corresponding to the slot value.
* `exclusive_end`: The index of the character just after the last character corresponding to the slot value in the utterance.
* `actions`: A list of actions corresponding to the system. Each action has the following fields:
* `act`: The type of action.
* `slot`: (optional) A slot argument for some of the actions.
* `values`: (optional) A list of values assigned to the slot. If the values list is non-empty, then the slot must be present.
* `canonical_values`: (optional) The values in their canonicalized form as used by the service. It is a list of strings of the same length as values.
* `service_call`: (system turns only, optional) The request sent to the service. It consists of the following fields:
* `method`: The name of the intent or function of the service or API being executed.
* `parameters`: A pair of lists of the same lengths: `parameter_slot_name` contains slot names and `parameter_canonical_value` contains the corresponding values in their canonicalized form.
* `service_results`: (system turns only, optional) A list of entities containing the results obtained from the service. It is only available for turns in which a service call is made. Each entity is represented as a pair of lists of the same length: `service_slot_name` contains slot names and `service_canonical_value` contains the corresponding canonical values.
* `state`: (user turns only) The dialogue state corresponding to the service. It consists of the following fields:
* `active_intent`: The intent corresponding to the service of the frame which is currently being fulfilled by the system. It takes the value "NONE" if none of the intents are active.
* `requested_slots`: A list of slots requested by the user in the current turn.
* `slot_values`: A pair of lists of the same lengths: `slot_name` contains slot names and `slot_value_list` contains the corresponding lists of strings. For categorical slots, this list contains a single value assigned to the slot. For non-categorical slots, all the values in this list are spoken variations of each other and are equivalent (e.g, "6 pm", "six in the evening", "evening at 6" etc.).
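The span-based slot annotation above can be decoded with plain string slicing. The helper below is a hypothetical sketch (not part of the dataset's tooling) showing how a frame's `slots` entries map back to surface values via `start`/`exclusive_end`:

```python
# Hypothetical helper: recover the surface value of each non-categorical
# slot from the character offsets stored in a frame's `slots` annotation.

def slot_values_from_spans(utterance, slot_spans):
    """Map each annotated slot name to the substring it covers.

    `slot_spans` follows the schema described above: a list of dicts
    with `slot`, `start`, and `exclusive_end` keys.
    """
    return {
        span["slot"]: utterance[span["start"]:span["exclusive_end"]]
        for span in slot_spans
    }

utterance = "I would like for it to be in San Jose."
spans = [{"slot": "city", "start": 29, "exclusive_end": 37}]
print(slot_values_from_spans(utterance, spans))  # {'city': 'San Jose'}
```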
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{'dialogue_id': '1_00000',
'services': ['Restaurants_1'],
'turns':
{'frames':
[{'actions': [{'act': [6],
'canonical_values': [['FindRestaurants']],
'slot': ['intent'],
'values': [['FindRestaurants']]}],
'service': ['Restaurants_1'],
'service_call': [{'method': '',
'parameters': {'parameter_canonical_value': [],
'parameter_slot_name': []}}],
'service_results': [{'service_results_list': []}],
'slots': [{'exclusive_end': [], 'slot': [], 'start': []}],
'state': [{'active_intent': 'FindRestaurants',
'requested_slots': [],
'slot_values': {'slot_name': [], 'slot_value_list': []}}]},
{'actions': [{'act': [13],
'canonical_values': [[]],
'slot': ['city'],
'values': [[]]}],
'service': ['Restaurants_1'],
'service_call': [{'method': '',
'parameters': {'parameter_canonical_value': [],
'parameter_slot_name': []}}],
'service_results': [{'service_results_list': []}],
'slots': [{'exclusive_end': [], 'slot': [], 'start': []}],
'state': [{'active_intent': '',
'requested_slots': [],
'slot_values': {'slot_name': [], 'slot_value_list': []}}]},
...,]}
'speaker': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
'utterance': [
'I am feeling hungry so I would like to find a place to eat.',
'Do you have a specific which you want the eating place to be located at?',
'I would like for it to be in San Jose.',
'Is there a specific cuisine type you enjoy, such as Mexican, Italian or something else?',
'I usually like eating the American type of food.',
'I see that at 71 Saint Peter there is a good restaurant which is in San Jose.',
'Can you give me the address of this restaurant.',
'If you want to go to this restaurant you can find it at 71 North San Pedro Street.',
'Can you give me the phone number that I can contact them with?',
'If you want to phone them you can at 408-971-8523.',
'Is there some other restaurant which you can suggest?',
'How would you like Bazille restaurant which is situated in San Jose.',
'Do you have another restaurant matching my needs? For example a restaurant which is economical and is located in Palo Alto.',
'I see that 7 restaurants suit to what you requested. Bird Dog seems as a good restaurant and is located in Palo Alto.',
'Alright, that seems good. I would like to make a booking at this restaurant.',
'For which time do you want the booking to be?',
'I will be eating there at 11:30 am so make it for then.',
'Can you please confirm that you want to book a table for 2 at 11:30 am at the Bird Dog restaurant in Palo Alto for today.',
'That suits me well. Can you tell me if they feature live music?',
'Your booking has been made without errors, but unfortunately they do not have live music.',
'Will I be able to find liquor there? Can you give me the address of their location?',
'The restaurant is located at 420 Ramona Street. Unfortunately they do not serve alcohol at the restaurant.',
'I appreciate it very much. That would be all.',
'Have a good time!'
]}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The dataset is split into a train, validation, and test set with the following sizes:
| | Train | Validation | Test |
| --- | --- | --- | --- |
| \# of dialogues | 16142 | 2482 | 4201 |
| \# of turns | 48426 | 7446 | 12603 |
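A quick arithmetic check on the table as printed in this card (not recomputed from the raw data) shows that every split contains exactly three times as many turns as dialogues:

```python
# Sanity check on the split sizes quoted above: turns / dialogues is
# exactly 3.0 for every split in this (GEM-reformatted) version.
splits = {
    "train": (16142, 48426),
    "validation": (2482, 7446),
    "test": (4201, 12603),
}
for name, (dialogues, turns) in splits.items():
    print(name, turns / dialogues)  # 3.0 for every split
```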
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data is generally split i.i.d., but some domains only appear in training and some only in testing. For example, the domains Messaging, Payment, and Train are test-only.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset comprises a wide range of dialog capabilities and thus enables the evaluation of many more generation capabilities than comparable datasets. Its collection methodology ensures both high diversity and high quality of the data.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
The domains are a lot more diverse than in other datasets.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
surface realization, compositionality.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`data points modified`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
We are focusing on the response-generation part of the dataset and thus reformatted the dataset to treat the service agent utterances as the targets to be generated and the previous customer utterance and the agent's dialog act as the input. We additionally reformat the dialog acts to directly conform to the format described in this [paper](https://arxiv.org/abs/2004.15006).
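The reformatting described above can be sketched as follows. This is a hedged illustration using the field names from this card, not the actual GEM loader code: each system turn becomes a generation target, paired with the preceding user utterance and the system's dialog acts as input.

```python
# Sketch of the GEM reformatting: one generation instance per SYSTEM turn,
# with the previous USER utterance as context. Field names follow this
# card; the real GEM loader may differ in detail.
def to_generation_examples(turns):
    examples = []
    prev_user = ""
    for turn in turns:
        if turn["speaker"] == "USER":
            prev_user = turn["utterance"]
        else:  # SYSTEM turn -> one generation instance
            acts = [a for frame in turn["frames"] for a in frame["actions"]]
            examples.append({
                "context": prev_user,
                "dialog_acts": acts,
                "target": turn["utterance"],
            })
    return examples

turns = [
    {"speaker": "USER",
     "utterance": "I would like for it to be in San Jose.",
     "frames": []},
    {"speaker": "SYSTEM",
     "utterance": "Is there a specific cuisine type you enjoy?",
     "frames": [{"actions": [{"act": "REQUEST", "slot": "cuisine"}]}]},
]
print(to_generation_examples(turns)[0]["target"])
# Is there a specific cuisine type you enjoy?
```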
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
yes
#### Split Information
<!-- info: Describe how the new splits were created -->
<!-- scope: periscope -->
9 challenge sets for Schema-Guided Dialog were added to the GEM evaluation suite.
1. We created subsets of the training and development sets of 500 randomly selected inputs each.
2. We applied 5 transformations to respectively 5 sets of 500 randomly selected inputs: (i) back-translation, (ii)-(iii) introduction of typographical errors, using Butterfingers with two thresholds (0.02 and 0.05), resulting in two sets with different amounts of typos introduced (there are more typos with the 0.05 threshold than with the 0.02 one), (iv) removal of final punctuations (when any), and (v) input scrambling, for which the order of the dialogue acts was randomly reassigned.
3. For the input size, we created subpopulations based on the number of dialogue acts in the input.
| DA number | Frequency English |
|---------------|-------------------|
| 1 | 5049 |
| 2 | 2517 |
| 3 | 1328 |
| 4 | 469 |
| 5 | 335 |
| 6 | 256 |
| 7 | 46 |
We also split the test data according to the type of dialogue act, represented by cardinal numbers in the dataset.
| DA type | Frequency English |
|--------------|-------------------|
| 2 | 1397 |
| 3 | 983 |
| 4 | 1027 |
| 5 | 958 |
| 9 | 72 |
| 10 | 1024 |
| 11 | 1246 |
| 12 | 500 |
| 13 | 2078 |
| 15 | 715 |
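The typographical-error transformations (ii)-(iii) above can be approximated with a Butter Fingers-style character perturbation. The sketch below is illustrative only: the keyboard-neighbor map is a tiny invented subset, not the one used to build the actual challenge splits.

```python
import random

# Minimal sketch of a Butter Fingers-style typo perturbation: each eligible
# character is replaced by a keyboard neighbor with probability `prob`.
# The neighbor map here is an illustrative subset, not the real one.
NEIGHBORS = {
    "a": "qws", "e": "wrd", "i": "uok", "o": "ipl", "u": "yij",
    "s": "adw", "t": "rgy", "n": "bmh", "r": "etf", "l": "kop",
}

def butter_fingers(text, prob, seed=0):
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in NEIGHBORS and rng.random() < prob:
            typo = rng.choice(NEIGHBORS[ch.lower()])
            out.append(typo.upper() if ch.isupper() else typo)
        else:
            out.append(ch)
    return "".join(out)

clean = "Is there a specific cuisine type you enjoy?"
print(butter_fingers(clean, prob=0.0))   # unchanged
print(butter_fingers(clean, prob=0.05))  # a few characters swapped
```

A higher threshold (0.05 vs. 0.02) perturbs more characters, which is how the two typo challenge sets differ in difficulty.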
#### Split Motivation
<!-- info: What aspects of the model's generation capacities were the splits created to test? -->
<!-- scope: periscope -->
Generalization and Robustness.
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
* [Paper for dataset and DST baseline](https://arxiv.org/pdf/1909.05855.pdf)
* [DSTC8 overview paper](https://arxiv.org/pdf/2002.01359.pdf)
* [Code for DST baseline](https://github.com/google-research/google-research/tree/master/schema_guided_dst)
* [Natural language generation baseline paper](https://arxiv.org/pdf/2004.15006.pdf)
* [Blog post announcing the dataset](https://ai.googleblog.com/2019/10/introducing-schema-guided-dialogue.html)
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Surface realization and compositionality.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEURT`, `BLEU`, `ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The original paper focused on the task of dialog state prediction instead of response generation and thus did not suggest any set of metrics.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
Previous multi-domain task-oriented dialogue datasets do not sufficiently capture the real-world challenges in virtual assistants, since they cover few domains and assume a single static ontology per domain.
The SGD dataset was created to cover 17 domains with over 16K dialogues, and contains multiple different APIs in most domains, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios.
The wide range of available annotations can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, language generation, user simulation learning, among other tasks in large-scale virtual assistants.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The goal of a speaker who generates the target utterance is to help users accomplish tasks including but not limited to finding flights, booking restaurants, searching for nearby events and movies.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Machine-generated`
#### Generation Method Link
<!-- info: If text was machine-generated for the dataset, provide a link to the generation method if available (N/A otherwise). -->
<!-- scope: periscope -->
[Github](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue)
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The dialogue outlines are first generated by a simulator. The dialogue simulator interacts with the services to generate dialogue outlines. It consists of two agents playing the roles of the user and the system, interacting with each other using a finite set of actions specified through dialogue acts over a probabilistic automaton designed to capture varied dialogue trajectories. It is worth noting that the simulation automaton does not include any domain-specific constraints: all domain-specific constraints are encoded in the schema and scenario.
The dialogue paraphrasing framework then converts the outlines generated by the simulator into a natural conversation. Users may refer to the slot values in the dialogue acts in various different ways during the conversation, e.g., “los angeles” may be referred to as “LA” or “LAX”. To introduce these natural variations in the slot values, different slot values are replaced with a randomly selected variation while being kept consistent across user turns in a dialogue. The actions are then converted to pseudo-natural language utterances using a set of manually defined action-to-text templates, and the resulting utterances for the different actions in a turn are concatenated together.
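The action-to-text templating step can be sketched as below. The template strings are invented for illustration; the real templates were manually defined by the dataset authors and are not reproduced here.

```python
# Illustrative action-to-text templating: each dialogue act is rendered
# with a template, and the rendered pieces for a turn are concatenated
# into a pseudo-natural-language utterance for crowd workers to paraphrase.
TEMPLATES = {
    "REQUEST": "Which {slot} do you want?",
    "INFORM": "The {slot} is {value}.",
}

def actions_to_pseudo_utterance(actions):
    parts = []
    for act in actions:
        template = TEMPLATES[act["act"]]
        value = (act.get("values") or [""])[0]
        parts.append(template.format(slot=act.get("slot", ""), value=value))
    return " ".join(parts)

acts = [
    {"act": "INFORM", "slot": "restaurant_name", "values": ["Bird Dog"]},
    {"act": "INFORM", "slot": "city", "values": ["Palo Alto"]},
]
print(actions_to_pseudo_utterance(acts))
# The restaurant_name is Bird Dog. The city is Palo Alto.
```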
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The dataset covers the following domains: Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, RentalCars, Restaurants, RideSharing, Services, Train, Travel, and Weather. The domain ‘Services’ includes salons, dentists, doctors etc. The ‘Alarm’, ‘Messaging’, ‘Payment’ and ‘Train’ domains are only present in the dev or test sets, to test generalization to new domains.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
crowd-sourced
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
unknown
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
0
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
0
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
unknown
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The dialogue transformed by these steps is sent to the crowd workers to be reformulated into more natural language. One crowd worker is tasked with paraphrasing all utterances of a dialogue to ensure naturalness and coherence. The crowd workers are asked to exactly repeat the slot values in their paraphrases so that the span indices for the slots can be recovered via string matching.
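Because crowd workers repeat slot values verbatim, the character offsets can be recovered from the paraphrase with plain substring search. The helper below is a hypothetical sketch of that recovery step, not the dataset authors' code:

```python
# Sketch of span recovery via string matching: find the verbatim slot
# value in the paraphrased utterance and emit start/exclusive_end offsets
# in the same format as the dataset's `slots` annotation.
def recover_span(paraphrase, slot, value):
    start = paraphrase.find(value)
    if start == -1:
        raise ValueError(f"value {value!r} not repeated verbatim")
    return {"slot": slot, "start": start, "exclusive_end": start + len(value)}

print(recover_span("How about Bird Dog in Palo Alto?", "restaurant_name", "Bird Dog"))
# {'slot': 'restaurant_name', 'start': 10, 'exclusive_end': 18}
```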
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
none
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
While no policy is reported, we assume that one was in place for the collection.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The SGD dataset does not use identity categories and does not contain sensitive data.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
Due to the combination of automatic generation and crowd-rater paraphrasing, the language can be very formulaic. While this may be acceptable on the model side (i.e., we may actually desire an automated agent to produce formulaic responses), the input utterances of the simulated customers likely do not cover the entire spectrum of the English language.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The dialogues are distributed unevenly across domains: the flights domain has 3644 dialogues, while the payment domain contains only 222.
In addition, all dialogues are paraphrased by crowd workers, and crowd workers with different cultural backgrounds may exhibit biased opinions.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
Since the initial data was automatically generated, the coverage of entity names is necessarily biased. An agent thus needs to be evaluated in a more realistic environment.
|
iohadrubin/mtop | 2022-01-01T20:54:04.000Z | [
"region:us"
] | iohadrubin | null | 0 | 81 | Entry not found | ||
katanaml/cord | 2022-03-06T15:02:45.000Z | [
"region:us"
] | katanaml | https://huggingface.co/datasets/katanaml/cord | @article{park2019cord,
title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk}
booktitle={Document Intelligence Workshop at Neural Information Processing Systems}
year={2019}
} | null | 1 | 81 | # CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
CORD dataset is cloned from [clovaai](https://github.com/clovaai/cord) GitHub repo
- Box coordinates are normalized against image width/height
- Labels with very few occurrences are replaced with O:
```
replacing_labels = ['menu.etc', 'menu.itemsubtotal',
'menu.sub_etc', 'menu.sub_unitprice',
'menu.vatyn', 'void_menu.nm',
'void_menu.price', 'sub_total.othersvc_price']
```
Check for more info [Sparrow](https://github.com/katanaml/sparrow)
## Citation
### CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
```
@article{park2019cord,
  title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
  author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk},
  booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
  year={2019}
}
```
### Post-OCR parsing: building simple and robust parser via BIO tagging
```
@article{hwang2019post,
  title={Post-OCR parsing: building simple and robust parser via BIO tagging},
  author={Hwang, Wonseok and Kim, Seonghyeon and Yim, Jinyeong and Seo, Minjoon and Park, Seunghyun and Park, Sungrae and Lee, Junyeop and Lee, Bado and Lee, Hwalsuk},
  booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
  year={2019}
}
``` |
truongpdd/vietnamese_story | 2022-09-23T04:44:26.000Z | [
"region:us"
] | truongpdd | null | null | null | 0 | 81 | Entry not found |