id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
yuan-yang/MALLS-v0 | 2023-05-31T20:32:14.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | yuan-yang | null | null | null | 0 | 26 | ---
license: cc-by-nc-4.0
viewer: true
task_categories:
- text-generation
language:
- en
pretty_name: MALLS NL-FOL Pairs 34K
size_categories:
- 10K<n<100K
---
# MALLS NL-FOL Pairs 34K
## Dataset details
MALLS (large language **M**odel gener**A**ted natural-**L**anguage-to-first-order-**L**ogic pair**S**)
consists of 34K pairs of real-world natural language (NL) statements and their corresponding first-order logic (FOL) rule annotations.
All pairs are generated by prompting GPT-4 and processed to ensure the validity of the FOL rules.
Note that we did not conduct a rigorous alignment check on the pairs, so a FOL rule may not accurately reflect the meaning of its NL statement.
Accordingly, we recommend treating the dataset as "silver" labels for training, and using another dataset with "gold" labels for evaluation.
# Dataset Structure
The file `MALLS-v0.json` contains the 34K pairs of the MALLS dataset; we also provide `folio_parsed.json`, which contains 2K pairs collected
and processed from the FOLIO dataset. Each entry in the file is a dictionary object of the following format:
```
{
  'NL': <the NL statement>,
'FOL': <the FOL rule>
}
```
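As a minimal sketch of reading the file (assuming `MALLS-v0.json` has been downloaded from this repo), the pairs can be parsed with the standard `json` module; the record below is an invented toy example in the documented layout, not an actual entry from the dataset:

```python
import json

# Toy record matching the documented {'NL': ..., 'FOL': ...} layout.
# For the real file, use: pairs = json.load(open("MALLS-v0.json"))
raw = '[{"NL": "All birds can fly.", "FOL": "forall x (Bird(x) -> CanFly(x))"}]'
pairs = json.loads(raw)

for pair in pairs:
    print(pair["NL"], "=>", pair["FOL"])
```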
**License:**
Attribution-NonCommercial 4.0 International.
Since the data were generated with GPT-4, use of this dataset must also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use
## Using the Dataset
We used MALLS to fine-tune a LLaMA-7B model for NL-FOL translation, namely LogicLLaMA, which achieves GPT-4-level performance.
**Project Page**
https://github.com/gblackout/LogicLLaMA
## Intended use
**Primary intended uses:**
MALLS is intended to be used for research.
## Citation
```
@article{yang2023harnessing,
title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
journal={arXiv preprint arXiv:2305.15541},
year={2023}
}
``` |
garythung/trashnet | 2023-06-02T03:23:04.000Z | [
"license:mit",
"region:us"
] | garythung | null | null | null | 0 | 26 | ---
license: mit
---
|
llm-book/aio-passages-bpr-bert-base-japanese-v3 | 2023-06-30T10:30:40.000Z | [
"size_categories:1M<n<10M",
"language:ja",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | llm-book | null | null | null | 0 | 26 | ---
language:
- ja
size_categories:
- 1M<n<10M
license:
- cc-by-sa-3.0
- gfdl
dataset_info:
features:
- name: id
dtype: int32
- name: pageid
dtype: int32
- name: revid
dtype: int32
- name: text
dtype: string
- name: section
dtype: string
- name: title
dtype: string
- name: embeddings
sequence: uint8
splits:
- name: train
num_bytes: 3483313719
num_examples: 4288198
download_size: 2160522807
dataset_size: 3483313719
---
# Dataset Card for llm-book/aio-passages-bert-base-japanese-v3-bpr
This dataset, used in the book "Introduction to Large Language Models" (大規模言語モデル入門), applies BPR passage embeddings to the passage dataset of the "AI王" (AI King) competition.
It extends the [llm-book/aio-passages](https://huggingface.co/datasets/llm-book/aio-passages) dataset with binary passage vectors produced by [llm-book/bert-base-japanese-v3-bpr-passage-encoder](https://huggingface.co/llm-book/bert-base-japanese-v3-bpr-passage-encoder), stored in the `embeddings` field.
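As a hedged sketch of how the packed `embeddings` field might be used (the assumption, based on how BPR works, is that each `uint8` packs 8 bits of a binary passage vector; the byte values below are toy examples, not real embeddings):

```python
# Unpack a uint8 sequence into its binary vector, then score passages by
# Hamming distance against a binarized query - the usual BPR retrieval step.
def unpack_bits(packed):
    return [(byte >> (7 - i)) & 1 for byte in packed for i in range(8)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

passage = unpack_bits([0b10110001, 0b01000000])  # 16-dim toy vector
query = unpack_bits([0b10110000, 0b01000001])
print(hamming(query, passage))  # differs in 2 bit positions
```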
## Licence
The Wikipedia content used in this dataset is distributed under the [Creative Commons Attribution-ShareAlike 3.0 license (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/deed.ja) and the [GNU Free Documentation License (GFDL)](https://www.gnu.org/licenses/fdl.html). |
Weni/LLM-base | 2023-08-25T18:00:38.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:pt",
"region:us"
] | Weni | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: resposta
dtype: string
- name: context
dtype: string
- name: correct_ans
dtype: int64
splits:
- name: train
num_bytes: 18628924
num_examples: 29073
download_size: 8866205
dataset_size: 18628924
task_categories:
- question-answering
language:
- pt
pretty_name: LLM_Base_QnA
size_categories:
- 10K<n<100K
---
# Dataset Card for "LLM-base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Patt/RTE_TH | 2023-06-14T16:51:34.000Z | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | null | 0 | 26 | ---
task_categories:
- text-classification
language:
- en
- th
---
# Dataset Card for RTE_TH
### Dataset Description
This dataset is a Thai-translated version of [RTE](https://huggingface.co/datasets/super_glue/viewer/rte), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) was used to score the quality of each Thai translation. |
dongyoung4091/hh-generated_flan_t5_large | 2023-06-22T07:44:33.000Z | [
"region:us"
] | dongyoung4091 | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1406677
num_examples: 100
download_size: 586332
dataset_size: 1406677
---
# Dataset Card for "hh-generated_flan_t5_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fujiki/guanaco_ja | 2023-07-16T15:01:30.000Z | [
"language:ja",
"license:gpl-3.0",
"region:us"
] | fujiki | null | null | null | 2 | 26 | ---
language: ja
license: gpl-3.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 53655938
num_examples: 110633
download_size: 30465845
dataset_size: 53655938
---
- This is the Japanese portion of the [Guanaco dataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset).
- You can also refer to other similar datasets like [inu-ai/alpaca-guanaco-japanese-gpt-1b](https://huggingface.co/inu-ai/alpaca-guanaco-japanese-gpt-1b). |
EleutherAI/race | 2023-07-03T21:27:18.000Z | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1704.04683",
"region:us"
] | EleutherAI | Race is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
dataset is collected from English examinations in China, which are designed for middle school and high school students.
The dataset can serve as training and test sets for machine comprehension. | @article{lai2017large,
title={RACE: Large-scale ReAding Comprehension Dataset From Examinations},
author={Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard},
journal={arXiv preprint arXiv:1704.04683},
year={2017}
} | null | 0 | 26 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: RACE
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: race
dataset_info:
---
# "race" Grouped by Article
This is a modified version of https://huggingface.co/datasets/race that returns documents grouped by article context instead of by question. **Note:** This dataset currently contains only the test set of the ```high``` subset of the data.
The original readme is contained below.
# Dataset Card for "race"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.cs.cmu.edu/~glai1/data/race/](http://www.cs.cmu.edu/~glai1/data/race/)
- **Repository:** https://github.com/qizhex/RACE_AR_baselines
- **Paper:** [RACE: Large-scale ReAding Comprehension Dataset From Examinations](https://arxiv.org/abs/1704.04683)
- **Point of Contact:** [Guokun Lai](mailto:guokun@cs.cmu.edu), [Qizhe Xie](mailto:qzxie@cs.cmu.edu)
- **Size of downloaded dataset files:** 76.33 MB
- **Size of the generated dataset:** 349.46 MB
- **Total amount of disk used:** 425.80 MB
### Dataset Summary
RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
dataset is collected from English examinations in China, which are designed for middle school and high school students.
The dataset can serve as training and test sets for machine comprehension.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 174.73 MB
- **Total amount of disk used:** 200.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### high
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 140.12 MB
- **Total amount of disk used:** 165.56 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### middle
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 34.61 MB
- **Total amount of disk used:** 60.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "B",
"article": "\"There is not enough oil in the world now. As time goes by, it becomes less and less, so what are we going to do when it runs ou...",
"example_id": "middle3.txt",
"options": ["There is more petroleum than we can use now.", "Trees are needed for some other things besides making gas.", "We got electricity from ocean tides in the old days.", "Gas wasn't used to run cars in the Second World War."],
"question": "According to the passage, which of the following statements is TRUE?"
}
```
### Data Fields
The data fields are the same among all splits.
#### all
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### high
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### middle
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
### Data Splits
| name |train|validation|test|
|------|----:|---------:|---:|
|all |87866| 4887|4934|
|high |62445| 3451|3498|
|middle|25421| 1436|1436|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
http://www.cs.cmu.edu/~glai1/data/race/
1. RACE dataset is available for non-commercial research purpose only.
2. All passages are obtained from the Internet which is not property of Carnegie Mellon University. We are not responsible for the content nor the meaning of these passages.
3. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.
4. We reserve the right to terminate your access to the RACE dataset at any time.
### Citation Information
```
@inproceedings{lai-etal-2017-race,
title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
author = "Lai, Guokun and
Xie, Qizhe and
Liu, Hanxiao and
Yang, Yiming and
Hovy, Eduard",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D17-1082",
doi = "10.18653/v1/D17-1082",
pages = "785--794",
}
```
### Contributions
Thanks to [@abarbosa94](https://github.com/abarbosa94), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
santoshtyss/us-court-cases | 2023-07-03T14:57:31.000Z | [
"region:us"
] | santoshtyss | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 68561500135
num_examples: 4430756
- name: validation
num_bytes: 369842972
num_examples: 100000
download_size: 15853634750
dataset_size: 68931343107
---
# Dataset Card for "us-court-cases"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zan/lima-ja | 2023-07-08T13:39:45.000Z | [
"language:ja",
"license:cc-by-nc-sa-4.0",
"arxiv:2305.11206",
"region:us"
] | zan | A high-quality japanese dataset for efficient instruction tuning. | @InProceedings{huggingface:dataset,
title = {LIMA-JA: Japanese LIMA Dataset},
author={zan},
year={2023}
} | null | 2 | 26 | ---
license: cc-by-nc-sa-4.0
language:
- ja
---
# Dataset Card for LIMA-JA
## Dataset Description
This is the Japanese LIMA dataset, translated from the [LIMA dataset](https://huggingface.co/datasets/GAIR/lima) on which Meta's [LIMA model](https://arxiv.org/abs/2305.11206) (Zhou et al., 2023) was trained.
The Japanese translation was produced with ChatGPT, after which I made some minor manual adjustments.
## Usage
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('zan/lima-ja')
>>> # dataset = load_dataset('zan/lima-ja', "v1")
```
```python
>>> dataset["train"][0]
{'conversations': ['脳細胞は移動することができるのでしょうか? 移動とは、遠距離の移動(可能ならば脳内のみで)を指します。',
'この質問は比較的広範であり、脳は単にニューロンだけでなく、グリア細胞(支持細胞)や前ミトーシスの神経幹細胞も含まれることを考慮する必要があります。さらに、重要な同僚科学者たちが指摘しているように、発生段階も非常に重要であり、発達中の胎児の脳は成人の脳とは非常に異なります。\n\nしかし、さまざまな出版物を精査した結果、この質問の答えは実際には非常に単純であることがわかりました。結論として、脳細胞は移動します。\n\n成人の脳では、グリア細胞が脳内を移動します(Klämbt、2009年)。グリア細胞は多くの機能に関与していますが、移動するグリア細胞の注目すべき例は、標的の軸索を見つけるために比較的長距離を移動し、それに巻きついて絶縁性のミエリン鞘を形成するオリゴデンドロサイトです(TsaiとMiller、2002年)。\n\n神経幹細胞は損傷に対応して長距離を移動します(Imitola et al.、2004年)し、特定の幹細胞の位置(例えば、海馬や脳室下帯)から他の領域に移動します(Clarke、2003年)。\n\nミトーシス後でありながら未分化のニューロンは、魚(Scott et al.、2012年)や哺乳類、および非人猿でも成人の脳内を移動することが示されています(Sawada et al.、2011年)。\n\n驚くことではありませんが、グリア細胞、幹細胞、およびニューロンは胎児の発生中も移動します。特に、末梢機能を果たすために運命づけられた分裂後のニューロンは、神経堤から標的の位置まで比較的長い距離を移動しなければなりません(Neuroscience、第2版、Neuronal Migration)。'],
'source': 'stackexchange'}
```
## Version Description
### v1
A version with about 100 manual corrections applied after the ChatGPT translation.
### v2
A further revised version
(Coming soon...)
## License
If the source data of LIMA has a stricter license than CC BY-NC-SA, the LIMA dataset follows the same. Otherwise, it follows the CC BY-NC-SA license.
## Citation Information
```
@InProceedings{huggingface:dataset,
title = {LIMA-JA: Japanese LIMA Dataset for Efficient Instruction-tuning},
author = {zan},
year = {2023}
}
``` |
moli99/different_view_dataset | 2023-07-10T15:36:04.000Z | [
"region:us"
] | moli99 | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: startingImage
dtype: image
- name: prompt
dtype: string
- name: finalImage
dtype: image
splits:
- name: train
num_bytes: 3359185561.736
num_examples: 9878
- name: test
num_bytes: 1657324530.555
num_examples: 4939
- name: validation
num_bytes: 558151107.0
num_examples: 1650
download_size: 3099567192
dataset_size: 5574661199.291
---
# Dataset Card for "different_view_dataset_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
awettig/Pile-YoutubeSubtitles-0.5B-6K-opt | 2023-07-10T19:35:45.000Z | [
"region:us"
] | awettig | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6500643383
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1594423762
dataset_size: 6565589075
---
# Dataset Card for "Pile-YoutubeSubtitles-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
awettig/Pile-ArXiv-0.5B-6K-opt | 2023-07-10T19:42:58.000Z | [
"region:us"
] | awettig | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6500959920
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1581567196
dataset_size: 6565905612
---
# Dataset Card for "Pile-ArXiv-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jstet/quotes-500k | 2023-07-12T15:14:13.000Z | [
"region:us"
] | jstet | null | null | null | 0 | 26 | Taken from Kaggle: https://www.kaggle.com/datasets/manann/quotes-500k?resource=download
It was upload there from this repo: https://github.com/ShivaliGoel/Quotes-500K
Paper:
Goel, S., Madhok, R., & Garg, S. (2018). Proposing Contextually Relevant Quotes for Images. Advances in Information Retrieval. Springer. doi: 10.1007/978-3-319-76941-7_49 |
Yuhthe/vietnews | 2023-07-26T02:59:45.000Z | [
"task_categories:summarization",
"language:vi",
"region:us"
] | Yuhthe | null | null | null | 0 | 26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: guid
dtype: int64
- name: title
dtype: string
- name: abstract
dtype: string
- name: article
dtype: string
splits:
- name: train
num_bytes: 325418455
num_examples: 99134
- name: validation
num_bytes: 73397317
num_examples: 22184
- name: test
num_bytes: 74536959
num_examples: 22498
download_size: 241345943
dataset_size: 473352731
task_categories:
- summarization
language:
- vi
---
# Dataset Card for "vietnews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LawBERT-tw/LawBERT_data | 2023-08-14T12:53:13.000Z | [
"region:us"
] | LawBERT-tw | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: law
num_bytes: 67381624
num_examples: 255683
- name: law_dict
num_bytes: 941705
num_examples: 2608
- name: law_judgement
num_bytes: 767070585
num_examples: 304981
- name: law_news
num_bytes: 1487522
num_examples: 1838
- name: law_qa
num_bytes: 2908108
num_examples: 4440
- name: law_rule
num_bytes: 74330814
num_examples: 34741
download_size: 37081540
dataset_size: 914120358
---
# Dataset Card for "LawBERT_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
illuin/small_african_accented_french_test | 2023-08-04T15:59:27.000Z | [
"region:us"
] | illuin | null | null | null | 1 | 26 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: path
dtype: string
splits:
- name: test
num_bytes: 97487354.0
num_examples: 1000
download_size: 97330196
dataset_size: 97487354.0
---
# Dataset Card for "small_african_accented_french_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reichenbach/arxiv_ppr_embeds | 2023-08-06T11:31:49.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:scientific_papers",
"language:en",
"license:unknown",
"abstractive-summarization",
"arxiv:1804.05685",
"region:us"
] | reichenbach | null | null | null | 0 | 26 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: ScientificPapers
size_categories:
- 100K<n<1M
source_datasets:
- scientific_papers
task_categories:
- summarization
task_ids: []
paperswithcode_id: null
tags:
- abstractive-summarization
dataset_info:
features:
- name: article
dtype: string
- name: abstract
dtype: string
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 8367611540
num_examples: 203037
- name: validation
num_bytes: 256178362
num_examples: 6440
- name: test
num_bytes: 255771184
num_examples: 6436
download_size: 4718720913
dataset_size: 8879561086
---
# Dataset Card for "scientific_papers"
This dataset is derived from https://huggingface.co/datasets/scientific_papers, augmented with embeddings generated via https://huggingface.co/docs/transformers/model_doc/rag using the Natural Questions-trained base model.
This dataset was created for retrieval-augmented generation examples and experiments.
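As a small illustration of the intended retrieval use (scoring passages by embedding similarity; the real `embeddings` field holds 768-dim float vectors, but the 3-dim vectors here are invented for brevity):

```python
# Rank passages by dot-product similarity against a query embedding,
# the core retrieval step in a RAG pipeline. Vectors are toy 3-dim values.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

query = [0.1, 0.3, 0.5]
passages = {"p1": [0.1, 0.2, 0.4], "p2": [0.9, 0.0, 0.1]}
ranked = sorted(passages, key=lambda name: dot(query, passages[name]), reverse=True)
print(ranked)  # ['p1', 'p2']  (scores 0.27 vs 0.14)
```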
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/armancohan/long-summarization
- **Paper:** [A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents](https://arxiv.org/abs/1804.05685)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The scientific papers dataset contains one set of long, structured documents.
The documents are obtained from the arXiv repository.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### arxiv
- **Size of downloaded dataset files:** 4.50 GB
- **Size of the generated dataset:** 7.58 GB
- **Total amount of disk used:** 12.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
"article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
"section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
```
### Data Fields
The data fields are the same among all splits.
#### arxiv
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
- `embeddings`: a `float` 768 dimensional vector
### Data Splits
| name |train |validation|test|
|------|-----:|---------:|---:|
|arxiv |203037| 6440|6436|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Cohan_2018,
title={A Discourse-Aware Attention Model for Abstractive Summarization of
Long Documents},
url={http://dx.doi.org/10.18653/v1/n18-2097},
DOI={10.18653/v1/n18-2097},
journal={Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language
Technologies, Volume 2 (Short Papers)},
publisher={Association for Computational Linguistics},
author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
year={2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
medarc/mednli | 2023-09-28T21:15:27.000Z | [
"region:us"
] | medarc | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: id
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 2192185
num_examples: 11232
- name: test
num_bytes: 273023
num_examples: 1422
- name: validation
num_bytes: 280149
num_examples: 1395
download_size: 810004
dataset_size: 2745357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for "mednli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vergarajit/1000K-reviews-az | 2023-08-10T14:28:04.000Z | [
"region:us"
] | vergarajit | null | null | null | 0 | 26 | Entry not found |
dhkim123/jy_finetune | 2023-08-21T07:54:21.000Z | [
"region:us"
] | dhkim123 | null | null | null | 0 | 26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 30579691.0
num_examples: 100
download_size: 30580727
dataset_size: 30579691.0
---
|
RikoteMaster/Emotion_Recognition_4_llama2_chat_oversampled | 2023-08-22T07:43:53.000Z | [
"region:us"
] | RikoteMaster | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: Augmented
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 39065708
num_examples: 82848
download_size: 12633611
dataset_size: 39065708
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Emotion_Recognition_4_llama2_chat_oversampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Pravincoder/law_llm_dataSample | 2023-09-13T05:13:04.000Z | [
"license:afl-3.0",
"region:us"
] | Pravincoder | null | null | null | 0 | 26 | ---
license: afl-3.0
---
|
JAYASWAROOP/mine_laws | 2023-09-15T06:52:45.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | JAYASWAROOP | null | null | null | 0 | 26 | ---
task_categories:
- text-classification
language:
- en
--- |
NegarMov/DF_segmented_mask | 2023-09-20T16:32:59.000Z | [
"region:us"
] | NegarMov | null | null | null | 0 | 26 | Entry not found |
llm-lens/lens_vqa_sample_test | 2023-09-17T17:14:49.000Z | [
"region:us"
] | llm-lens | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: id_image
dtype: int64
- name: question_id
dtype: int64
- name: question
dtype: string
- name: image
dtype: image
- name: id
dtype: int64
- name: intensive_captions_Salesforce-blip-image-captioning-large
sequence: string
splits:
- name: test
num_bytes: 1601792.0
num_examples: 10
download_size: 1595850
dataset_size: 1601792.0
---
# Dataset Card for "lens_vqa_sample_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ashu000999/medicalchatbot | 2023-09-21T09:55:22.000Z | [
"region:us"
] | ashu000999 | null | null | null | 0 | 26 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
DavidLanz/chinese-dolly-input-output-15k | 2023-09-22T02:13:53.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | DavidLanz | null | null | null | 0 | 26 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Chinese-Dolly-15k is a Traditional Chinese translation of the Dolly instruction dataset (Databricks), formatted as question-answer JSON for fine-tuning.
The original dataset, 'databricks/databricks-dolly-15k', is an open-source dataset of instruction-following records generated by thousands of Databricks employees across the behavior categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any academic or commercial purpose under the Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0) license.
If you are also preparing datasets like these, feel free to contact us to avoid duplicated effort.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author = {DavidLanz},
title = {An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-09-15}
}
```
|
chrisgru/databricks-dolly-1k | 2023-09-23T08:34:27.000Z | [
"region:us"
] | chrisgru | null | null | null | 0 | 26 | Entry not found |
Brandoko/Instruct-Recharts-750-v1 | 2023-09-26T23:21:49.000Z | [
"region:us"
] | Brandoko | null | null | null | 0 | 26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 359930
num_examples: 186
download_size: 91594
dataset_size: 359930
---
# Dataset Card for "Instruct-Recharts-750-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sambhavnoobcoder/test_secondary | 2023-09-28T14:43:22.000Z | [
"region:us"
] | Sambhavnoobcoder | null | null | null | 0 | 26 | Entry not found |
vision-paper/DHI | 2023-09-28T07:53:31.000Z | [
"region:us"
] | vision-paper | null | null | null | 0 | 26 | Entry not found |
sayan1101/llama-2-13b-subjectfinetune-grammar | 2023-10-03T12:22:56.000Z | [
"region:us"
] | sayan1101 | null | null | null | 0 | 26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 1250979.4995054402
num_examples: 4549
- name: test
num_bytes: 139150.50049455985
num_examples: 506
download_size: 447422
dataset_size: 1390130.0
---
# Dataset Card for "llama-2-13b-subjectfinetune-grammar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alex-tecky/common_voice_zh_hk_processed | 2023-10-01T15:52:39.000Z | [
"region:us"
] | alex-tecky | null | null | null | 0 | 26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence:
sequence: float32
- name: labels
sequence: int64
- name: input_length
dtype: float64
splits:
- name: train
num_bytes: 13464160656.0
num_examples: 14018
- name: test
num_bytes: 5372062988
num_examples: 5593
download_size: 3041478840
dataset_size: 18836223644.0
---
# Dataset Card for "common_voice_zh_hk_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hanifabdlh/quac-cahya-instructions-indonesians | 2023-10-02T02:13:28.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 36111776
num_examples: 86091
download_size: 18110341
dataset_size: 36111776
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-cahya-instructions-indonesians"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
asgaardlab/SampleDataset | 2023-10-06T04:06:42.000Z | [
"region:us"
] | asgaardlab | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: Buggy Image
dtype: image
- name: Correct Image
dtype: image
- name: Segmentation Image
dtype: image
- name: Description
dtype: string
- name: Object Count
dtype: int64
- name: Camera Position
struct:
- name: x
dtype: float64
- name: y
dtype: float64
- name: z
dtype: float64
- name: Tag
dtype: string
- name: Objects JSON
dtype: string
- name: Victim Name
dtype: string
- name: Victim Position
struct:
- name: x
dtype: float64
- name: y
dtype: float64
- name: z
dtype: float64
- name: Victim Screen Position
struct:
- name: height
dtype: float64
- name: serializedVersion
dtype: string
- name: width
dtype: float64
- name: x
dtype: float64
- name: y
dtype: float64
- name: Victim Color
struct:
- name: a
dtype: int64
- name: b
dtype: int64
- name: g
dtype: int64
- name: r
dtype: int64
- name: Victim Origin Name
dtype: string
- name: Victim Origin Position
struct:
- name: x
dtype: float64
- name: y
dtype: float64
- name: z
dtype: float64
- name: Victim Origin Screen Position
struct:
- name: height
dtype: float64
- name: serializedVersion
dtype: string
- name: width
dtype: float64
- name: x
dtype: float64
- name: y
dtype: float64
- name: Victim Origin Color
struct:
- name: a
dtype: int64
- name: b
dtype: int64
- name: g
dtype: int64
- name: r
dtype: int64
splits:
- name: validation
num_bytes: 32963604.0
num_examples: 63
download_size: 32370981
dataset_size: 32963604.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "SampleDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/sloppy_addition_alice_1.0_hard_4 | 2023-10-05T17:49:55.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: true_label
dtype: bool
- name: id
dtype: int64
splits:
- name: train
num_bytes: 1232468.26824
num_examples: 28842
- name: validation
num_bytes: 121744.7376
num_examples: 2848
- name: test
num_bytes: 118231.2175
num_examples: 2770
download_size: 0
dataset_size: 1472444.2233399998
---
# Dataset Card for "sloppy_addition_alice_1.0_hard_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/AnikaBasu-CyberbullyingDataset | 2023-10-04T23:37:22.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 498571
num_examples: 2955
download_size: 321067
dataset_size: 498571
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "AnikaBasu-CyberbullyingDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-testset-biencoder-data-75_25 | 2023-10-06T07:38:26.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 48986357
num_examples: 513
download_size: 8353824
dataset_size: 48986357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-testset-biencoder-data-75_25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Robin246/sb_chatdatav1 | 2023-10-08T11:59:56.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | Robin246 | null | null | null | 0 | 26 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 2672
num_examples: 49
download_size: 3343
dataset_size: 2672
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KonstantyM/science_qa | 2023-10-08T00:23:32.000Z | [
"region:us"
] | KonstantyM | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 7497499873
num_examples: 4432703
download_size: 4282191598
dataset_size: 7497499873
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "science_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arifzanko/test_chat_summarization | 2023-10-10T01:19:22.000Z | [
"region:us"
] | arifzanko | null | null | null | 0 | 26 | Entry not found |
Luciya/llama-2-nuv-intent-noE | 2023-10-10T06:04:10.000Z | [
"region:us"
] | Luciya | null | null | null | 0 | 26 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 711010
num_examples: 1585
download_size: 0
dataset_size: 711010
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-noE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tilde_model | 2022-11-03T16:31:39.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et"... | null | This is the Tilde MODEL Corpus – Multilingual Open Data for European Languages.
The data has been collected from sites allowing free use and reuse of its content, as well as from Public Sector web sites. The activities have been undertaken as part of the ODINE Open Data Incubator for Europe, which aims to support the next generation of digital businesses and fast-track the development of new products and services. The corpus includes the following parts:
Tilde MODEL - EESC is a multilingual corpus compiled from document texts of European Economic and Social Committee document portal. Source: http://dm.eesc.europa.eu/
Tilde MODEL - RAPID multilingual parallel corpus is compiled from all press releases of Press Release Database of European Commission released between 1975 and end of 2016 as available from http://europa.eu/rapid/
Tilde MODEL - ECB multilingual parallel corpus is compiled from the multilingual pages of the European Central Bank web site http://ecb.europa.eu/
Tilde MODEL - EMA is a corpus compiled from texts of European Medicines Agency document portal as available in http://www.ema.europa.eu/ at the end of 2016
Tilde MODEL - World Bank is a corpus compiled from texts of World Bank as available in http://www.worldbank.org/ in 2017
Tilde MODEL - AirBaltic.com Travel Destinations is a multilingual parallel corpus compiled from description texts of AirBaltic.com travel destinations as available in https://www.airbaltic.com/en/destinations/ in 2017
Tilde MODEL - LiveRiga.com is a multilingual parallel corpus compiled from Riga tourist attractions description texts of http://liveriga.com/ web site in 2017
Tilde MODEL - Lithuanian National Philharmonic Society is a parallel corpus compiled from texts of Lithuanian National Philharmonic Society web site http://www.filharmonija.lt/ in 2017
Tilde MODEL - mupa.hu is a parallel corpus from texts of Müpa Budapest - web site of Hungarian national culture house and concert venue https://www.mupa.hu/en/ compiled in spring of 2017
Tilde MODEL - fold.lv is a parallel corpus from texts of fold.lv portal http://www.fold.lv/en/ of the best of Latvian and foreign creative industries as compiled in spring of 2017
Tilde MODEL - czechtourism.com is a multilingual parallel corpus from texts of http://czechtourism.com/ portal compiled in spring of 2017
30 languages, 274 bitexts
total number of files: 125
total number of tokens: 1.43G
total number of sentence fragments: 62.44M | Roberts Rozis, Raivis Skadins, 2017, Tilde MODEL - Multilingual Open Data for EU Languages. Proceedings of the 21th Nordic Conference of Computational Linguistics NODALIDA 2017 | null | 1 | 25 | ---
annotations_creators:
- found
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hr
- hu
- is
- it
- lt
- lv
- mt
- nl
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- tr
- uk
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: tilde-model-corpus
pretty_name: Tilde Multilingual Open Data for European Languages
dataset_info:
- config_name: bg-el
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- el
splits:
- name: train
num_bytes: 258081
num_examples: 455
download_size: 64430
dataset_size: 258081
- config_name: cs-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 709168
num_examples: 3100
download_size: 201503
dataset_size: 709168
- config_name: de-hr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- hr
splits:
- name: train
num_bytes: 180148538
num_examples: 683194
download_size: 49585877
dataset_size: 180148538
- config_name: en-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- 'no'
splits:
- name: train
num_bytes: 73797124
num_examples: 348141
download_size: 17852861
dataset_size: 73797124
- config_name: es-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 3808423
num_examples: 13464
download_size: 1160892
dataset_size: 3808423
---
# Dataset Card for Tilde Multilingual Open Data for European Languages
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/TildeMODEL.php
- **Repository:** None
- **Paper:** https://www.aclweb.org/anthology/W17-0235.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't part of the predefined configs, simply specify the two language codes as arguments.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/TildeMODEL.php
E.g.
`dataset = load_dataset("tilde_model", lang1="en", lang2="lv")`
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
ARTeLab/fanpage | 2022-11-17T02:49:54.000Z | [
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:original",
"language:it",
"license:unknown",
"region:us"
] | ARTeLab | null | null | null | 3 | 25 | ---
language:
- it
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
---
# Dataset Card for fanpage
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Fanpage dataset, containing news articles taken from Fanpage.
There are two features:
- source: Input news article.
- target: Summary of the article.
### Supported Tasks and Leaderboards
- `abstractive-summarization`, `summarization`
### Languages
The text in the dataset is in Italian
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228)
```
@Article{info13050228,
AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
JOURNAL = {Information},
VOLUME = {13},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {228},
URL = {https://www.mdpi.com/2078-2489/13/5/228},
ISSN = {2078-2489},
ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
DOI = {10.3390/info13050228}
}
``` |
cestwc/adapted-paranmt5m | 2021-12-15T11:37:07.000Z | [
"region:us"
] | cestwc | null | null | null | 3 | 25 | Entry not found |
sagteam/author_profiling | 2022-08-09T12:33:07.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ru",
"licen... | sagteam | he corpus for the author profiling analysis contains texts in Russian-language which labeled for 5 tasks:
1) gender -- 13530 texts labeled with whether the text was written by a female or a male;
2) age -- 13530 texts labeled with the age of the person who wrote the text. This is a number from 12 to 80. In addition, for the classification task we added 5 age groups: 1-19; 20-29; 30-39; 40-49; 50+;
3) age imitation -- 7574 texts, where crowdsourced authors were asked to write three texts:
a) in their natural manner,
b) imitating the style of someone younger,
c) imitating the style of someone older;
4) gender imitation -- 5956 texts, where crowdsourced authors were asked to write texts both in their own gender and pretending to be the opposite gender;
5) style imitation -- 5956 texts, where crowdsourced authors were asked to write a text on behalf of another person of their own gender, distorting their usual style. | \ | null | 1 | 25 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ru
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: The Corpus for the analysis of author profiling in Russian-language texts.
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
---
# Dataset Card for [author_profiling]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/sag111/Author-Profiling
- **Repository:** https://github.com/sag111/Author-Profiling
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Sboev Alexander](mailto:sag111@mail.ru)
### Dataset Summary
The corpus for the author profiling analysis contains Russian-language texts labeled for 5 tasks:
1) gender -- 13448 texts labeled with whether the text was written by a female or a male;
2) age -- 13448 texts labeled with the age of the person who wrote the text. This is a number from 12 to 80. In addition, for the classification task we added 5 age groups: 0-19; 20-29; 30-39; 40-49; 50+;
3) age imitation -- 8460 texts, where crowdsourced authors were asked to write three texts:
a) in their natural manner,
b) imitating the style of someone younger,
c) imitating the style of someone older;
4) gender imitation -- 4988 texts, where crowdsourced authors were asked to write texts both in their own gender and pretending to be the opposite gender;
5) style imitation -- 4988 texts, where crowdsourced authors were asked to write a text on behalf of another person of their own gender, distorting their usual style.
The dataset was collected using the Yandex.Toloka service ([link](https://toloka.yandex.ru/en)).
You can read the data using the following Python code:
```
import json

def load_jsonl(input_path: str) -> list:
    """
    Read a list of objects from a JSON lines file.
    """
    data = []
    with open(input_path, 'r', encoding='utf-8') as f:
        for line in f:
            data.append(json.loads(line.rstrip('\n\r')))
    print('Loaded {} records from {}\n'.format(len(data), input_path))
    return data

path_to_file = "./data/train.jsonl"
data = load_jsonl(path_to_file)
```
or you can use HuggingFace style:
```
from datasets import load_dataset
train_df = load_dataset('sagteam/author_profiling', split='train')
valid_df = load_dataset('sagteam/author_profiling', split='validation')
test_df = load_dataset('sagteam/author_profiling', split='test')
```
#### Here are some statistics:
1. For Train file:
- No. of documents -- 9564;
- No. of unique texts -- 9553;
- Text length in characters -- min: 197, max: 2984, mean: 500.5;
- No. of documents written -- by men: 4704, by women: 4860;
- No. of unique authors -- 2344; men: 1172, women: 1172;
- Age of the authors -- min: 13, max: 80, mean: 31.2;
- No. of documents by age group -- 0-19: 813, 20-29: 4188, 30-39: 2697, 40-49: 1194, 50+: 672;
- No. of documents with gender imitation: 1215; without gender imitation: 2430; not applicable: 5919;
- No. of documents with age imitation -- younger: 1973; older: 1973; without age imitation: 1973; not applicable: 3645;
- No. of documents with style imitation: 1215; without style imitation: 2430; not applicable: 5919.
2. For Validation file:
- No. of documents -- 1320;
- No. of unique texts -- 1316;
- Text length in characters -- min: 200, max: 2809, mean: 520.8;
- No. of documents written -- by men: 633, by women: 687;
- No. of unique authors -- 336; men: 168, women: 168;
- Age of the authors -- min: 15, max: 79, mean: 32.2;
- No. of documents by age group -- 0-19: 117, 20-29: 570, 30-39: 339, 40-49: 362, 50+: 132;
- No. of documents with gender imitation: 156; without gender imitation: 312; not applicable: 852;
- No. of documents with age imitation -- younger: 284; older: 284; without age imitation: 284; not applicable: 468;
- No. of documents with style imitation: 156; without style imitation: 312; not applicable: 852.
3. For Test file:
- No. of documents -- 2564;
- No. of unique texts -- 2561;
- Text length in characters -- min: 199, max: 3981, mean: 515.6;
- No. of documents written -- by men: 1290, by women: 1274;
- No. of unique authors -- 672; men: 336, women: 336;
- Age of the authors -- min: 12, max: 67, mean: 31.8;
- No. of documents by age group -- 1-19: 195, 20-29: 1131, 30-39: 683, 40-49: 351, 50+: 204;
- No. of documents with gender imitation: 292; without gender imitation: 583; not applicable: 1689;
- No. of documents with age imitation -- younger: 563; older: 563; without age imitation: 563; not applicable: 875;
- No. of documents with style imitation: 292; without style imitation: 583; not applicable: 1689.
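These per-split statistics can be recomputed from the loaded records. Below is a minimal sketch, assuming the field names described under Data Fields (`text`, `gender`, `age_group`); it is illustrative, not the authors' original counting script:

```python
from collections import Counter

def split_statistics(records: list) -> dict:
    """Compute simple corpus statistics for one split.

    Assumes each record has 'text', 'gender' and 'age_group' fields,
    as described in the Data Fields section.
    """
    lengths = [len(r['text']) for r in records]
    return {
        'n_documents': len(records),
        'n_unique_texts': len({r['text'] for r in records}),
        'min_length': min(lengths),
        'max_length': max(lengths),
        'by_gender': Counter(r['gender'] for r in records),
        'by_age_group': Counter(r['age_group'] for r in records),
    }

# Toy usage with two fake records:
demo = [
    {'text': 'a' * 200, 'gender': 'male', 'age_group': '20-29'},
    {'text': 'b' * 300, 'gender': 'female', 'age_group': '30-39'},
]
stats = split_statistics(demo)
```

Running it over the list returned by `load_jsonl` reproduces the per-split counts.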
### Supported Tasks and Leaderboards
This dataset is intended for multi-class and multi-label text classification.
The baseline models currently achieve the following weighted-F1 scores:
| Model name | gender | age_group | gender_imitation | age_imitation | style_imitation | no_imitation | average |
| ------------------- | ------ | --------- | ---------------- | ------------- | --------------- | ------------ | ------- |
| Dummy-stratified | 0.49 | 0.29 | 0.56 | 0.32 | 0.57 | 0.55 | 0.46 |
| Dummy-uniform | 0.49 | 0.23 | 0.51 | 0.32 | 0.51 | 0.51 | 0.43 |
| Dummy-most_frequent | 0.34 | 0.27 | 0.53 | 0.17 | 0.53 | 0.53 | 0.40 |
| LinearSVC + TF-IDF | 0.67 | 0.37 | 0.62 | 0.72 | 0.71 | 0.71 | 0.63 |
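The LinearSVC + TF-IDF baseline in the last row can be sketched with scikit-learn. The exact features and hyperparameters behind the reported scores are not specified here, so the vectorizer settings below (character n-grams) are an assumption; a separate classifier is trained per label (gender, age group, each imitation flag):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in corpus; the real baseline is trained on the train split.
texts = [
    "привет как дела",
    "добрый день коллеги",
    "ну что погнали",
    "уважаемые дамы и господа",
]
labels = ["informal", "formal", "informal", "formal"]

# Character n-grams are a reasonable (assumed) choice for Russian text.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(texts, labels)
pred = clf.predict(["здравствуйте уважаемые"])
```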
### Languages
The text in the dataset is in Russian.
## Dataset Structure
### Data Instances
Each instance is a text in Russian with some author profiling annotations.
An example for an instance from the dataset is shown below:
```
{
'id': 'crowdsource_4916',
'text': 'Ты очень симпатичный, Я давно не с кем не встречалась. Ты мне сильно понравился, ты умный интересный и удивительный, приходи ко мне в гости , у меня есть вкусное вино , и приготовлю вкусный ужин, посидим пообщаемся, узнаем друг друга поближе.',
'account_id': 'account_#1239',
'author_id': 411,
'age': 22,
'age_group': '20-29',
'gender': 'male',
'no_imitation': 'with_any_imitation',
'age_imitation': 'None',
'gender_imitation': 'with_gender_imitation',
'style_imitation': 'no_style_imitation'
}
```
### Data Fields
Data fields include:
- id -- unique identifier of the sample;
- text -- the author's text, written by a crowdsourcing user;
- author_id -- unique identifier of the user;
- account_id -- unique identifier of the crowdsource account;
- age -- age annotations;
- age_group -- age group annotations;
- gender -- gender annotations;
- no_imitation -- imitation annotations. Label codes:
  - 'with_any_imitation' -- there is some imitation in the text;
  - 'no_any_imitation' -- the text is written without any imitation.
- age_imitation -- age imitation annotations. Label codes:
  - 'younger' -- someone younger than the author is imitated in the text;
  - 'older' -- someone older than the author is imitated in the text;
  - 'no_age_imitation' -- the text is written without age imitation;
  - 'None' -- not supported (the text was not written for this task).
- gender_imitation -- gender imitation annotations. Label codes:
  - 'no_gender_imitation' -- the text is written without gender imitation;
  - 'with_gender_imitation' -- the text is written with a gender imitation;
  - 'None' -- not supported (the text was not written for this task).
- style_imitation -- style imitation annotations. Label codes:
  - 'no_style_imitation' -- the text is written without style imitation;
  - 'with_style_imitation' -- the text is written with a style imitation;
  - 'None' -- not supported (the text was not written for this task).
### Data Splits
The dataset includes a set of train/valid/test splits with 9564, 1320 and 2564 texts respectively.
The unique authors do not overlap between the splits.
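This author-disjointness property can be sanity-checked directly from the `author_id` field; a minimal sketch:

```python
def assert_disjoint_authors(*splits):
    """Check that no author appears in more than one split.

    Each argument is a (name, records) pair, where records is a list of
    dicts carrying the 'author_id' field described above.
    """
    seen = {}
    for name, records in splits:
        ids = {r['author_id'] for r in records}
        for other, other_ids in seen.items():
            overlap = ids & other_ids
            assert not overlap, f"{name} and {other} share authors: {overlap}"
        seen[name] = ids

# Toy usage with fake records:
train = [{'author_id': 1}, {'author_id': 2}]
test = [{'author_id': 3}]
assert_disjoint_authors(('train', train), ('test', test))  # passes silently
```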
## Dataset Creation
### Curation Rationale
The dataset consists of texts in Russian collected via a crowdsourcing platform. It can be used to improve the accuracy of supervised classifiers in author profiling tasks.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from a crowdsourcing platform. Each text was written by its author specifically for the task provided.
#### Who are the source language producers?
Russian-speaking Yandex.Toloka users.
### Annotations
#### Annotation process
We used a crowdsourcing platform to collect texts. Each respondent is asked to fill out a questionnaire stating their gender, age, and native language.
For the age imitation task, the respondents choose a topic out of a few suggested and write three texts on it:
1) Text in their natural manner;
2) Text imitating the style of someone younger;
3) Text imitating the style of someone older.
For the gender and style imitation tasks, each author wrote three texts in different styles:
1) Text in the author's natural style;
2) Text imitating the style of the other gender;
3) Text in a different style but without gender imitation.
The topics to choose from are the following:
- An attempt to persuade some arbitrary listener to meet the respondent at their place;
- A story about some memorable event/acquisition/rumour or whatever else the imaginary listener is supposed to enjoy;
- A story about oneself or about someone else, aiming to please the listener and win their favour;
- A description of oneself and one’s potential partner for a dating site;
- An attempt to persuade an unfamiliar person to come;
- A negative tour review.
A text does not pass checking and is considered improper work if it contains:
- Irrelevant answers to the questionnaire;
- Incoherent jumble of words;
- Chunks of text borrowed from somewhere else;
- Texts not conforming to the above list of topics.
Text checking is performed first by an automated search for borrowings (via an anti-plagiarism website), and then by a manual review of compliance with the task.
#### Who are the annotators?
Russian-speaking Yandex.Toloka users.
### Personal and Sensitive Information
All personal data was anonymized. Each author has been assigned an impersonal, unique identifier.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Researchers at AI technology lab at NRC "Kurchatov Institute". See the [website](https://sagteam.ru/).
### Licensing Information
Apache License 2.0.
### Citation Information
If you have found our results helpful in your work, feel free to cite our publication.
```
@article{сбоев2022сравнение,
title={СРАВНЕНИЕ ТОЧНОСТЕЙ МЕТОДОВ НА ОСНОВЕ ЯЗЫКОВЫХ И ГРАФОВЫХ НЕЙРОСЕТЕВЫХ МОДЕЛЕЙ ДЛЯ ОПРЕДЕЛЕНИЯ ПРИЗНАКОВ АВТОРСКОГО ПРОФИЛЯ ПО ТЕКСТАМ НА РУССКОМ ЯЗЫКЕ},
author={Сбоев, АГ and Молошников, ИА and Рыбка, РБ and Наумов, АВ and Селиванов, АА},
journal={Вестник Национального исследовательского ядерного университета МИФИ},
volume={10},
number={6},
pages={529--539},
year={2021},
publisher={Общество с ограниченной ответственностью МАИК "Наука/Интерпериодика"}
}
```
### Contributions
Thanks to [@naumov-al](https://github.com/naumov-al) for adding this dataset.
|
enimai/MuST-C-fr | 2022-11-21T18:39:41.000Z | [
"task_categories:translation",
"language:en",
"language:fr",
"license:apache-2.0",
"region:us"
] | enimai | null | null | null | 0 | 25 | ---
license: apache-2.0
language:
- en
- fr
task_categories:
- translation
---
|
hackathon-pln-es/scientific_papers_en | 2022-04-03T23:54:33.000Z | [
"region:us"
] | hackathon-pln-es | null | null | null | 0 | 25 | Entry not found |
SetFit/amazon_massive_intent_en-US | 2022-05-06T09:08:00.000Z | [
"region:us"
] | SetFit | null | null | null | 3 | 25 | Entry not found |
NLPC-UOM/Student_feedback_analysis_dataset | 2022-10-25T10:13:19.000Z | [
"region:us"
] | NLPC-UOM | null | null | null | 1 | 25 | # README
## Annotated Student Feedback
---
annotations_creators: []
language:
- en
license:
- mit
---
This resource contains 3,000 student feedback texts that have been annotated for aspect terms, opinion terms, polarities of the opinion terms towards the targeted aspects, document-level opinion polarities, and sentence separations.
### Folder Structure of the resource,
```bash
└───Annotated Student Feedback Data
├───Annotator_1
│ ├───Annotated_part_1
│ ├───Annotated_part_2
│ └───towe-eacl_recreation_data_set
│ ├───defomative comment removed
│ └───less than 100 lengthy comment
├───Annotator_2
│ ├───Annotated_part_3
│ ├───Annotated_part_4
│ └───Annotated_part_5
└───Annotator_3
└───Annotated_part_6
```
Each Annotated_part_# folder contains three files, in XMI, XML, and ZIP formats.
The XMI files contain the annotated student feedback data, and the XML files contain the tagsets used for annotation.
Find the code for reading data from the XML and XMI files in `code_for_read_annotated_data.py`.
|
HuggingFaceM4/vatex | 2022-05-13T21:27:03.000Z | [
"region:us"
] | HuggingFaceM4 | VATEX is a large-scale multilingual video description dataset, which contains over 41,250 videos and 825,000 captions
in both English and Chinese. VATEX is characterized by the following major unique properties.
First, it contains both English and Chinese descriptions at scale, which can support many multilingual studies
that are constrained by monolingual datasets. Second, VATEX has a high number of clip-sentence pairs
with each video clip annotated with multiple unique sentences, and every caption is unique in
the whole corpus. Third, VATEX contains more comprehensive yet representative video content,
covering 600 human activities in total. Furthermore, both the English and Chinese corpora in
VATEX are lexically richer and thus allow more natural and diverse caption generation. | @InProceedings{Wang_2019_ICCV,
author = {Wang, Xin and Wu, Jiawei and Chen, Junkun and Li, Lei and Wang, Yuan-Fang and Wang, William Yang},
title = {VaTeX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
} | null | 1 | 25 | Entry not found |
BirdL/DALL-E-Cats | 2022-09-28T21:07:37.000Z | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | BirdL | null | null | null | 0 | 25 | ---
annotations_creators: []
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: DALL-E Cats Dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- image-classification
- unconditional-image-generation
task_ids: []
---
DALL-E-Cats is a dataset meant to serve as a synthetic animal dataset. It is the successor to DALL-E-Dogs. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the [BirdL-AirL License.](https://huggingface.co/spaces/BirdL/license/) |
hugginglearners/data-science-job-salaries | 2022-08-17T18:42:40.000Z | [
"license:cc0-1.0",
"region:us"
] | hugginglearners | null | null | null | 2 | 25 | ---
license:
- cc0-1.0
kaggle_id: ruchi798/data-science-job-salaries
---
# Dataset Card for Data Science Job Salaries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/ruchi798/data-science-job-salaries
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Content
| Column | Description |
|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| work_year | The year the salary was paid. |
| experience_level   | The experience level in the job during the year, with the following possible values: EN = Entry-level / Junior, MI = Mid-level / Intermediate, SE = Senior-level / Expert, EX = Executive-level / Director |
| employment_type    | The type of employment for the role: PT = Part-time, FT = Full-time, CT = Contract, FL = Freelance |
| job_title | The role worked in during the year. |
| salary | The total gross salary amount paid. |
| salary_currency | The currency of the salary paid as an ISO 4217 currency code. |
| salary_in_usd | The salary in USD (FX rate divided by avg. USD rate for the respective year via fxdata.foorilla.com). |
| employee_residence | Employee's primary country of residence in during the work year as an ISO 3166 country code. |
| remote_ratio       | The overall amount of work done remotely; possible values are: 0 = No remote work (less than 20%), 50 = Partially remote, 100 = Fully remote (more than 80%) |
| company_location   | The country of the employer's main office or contracting branch as an ISO 3166 country code. |
| company_size       | The average number of people that worked for the company during the year: S = less than 50 employees (small), M = 50 to 250 employees (medium), L = more than 250 employees (large) |
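When working with the raw columns it can help to expand the categorical codes into readable labels; below is a minimal sketch with the mappings transcribed from the table above (the function and constant names are ours, not part of the dataset):

```python
# Mappings transcribed from the column descriptions above.
EXPERIENCE_LEVEL = {
    "EN": "Entry-level / Junior",
    "MI": "Mid-level / Intermediate",
    "SE": "Senior-level / Expert",
    "EX": "Executive-level / Director",
}
EMPLOYMENT_TYPE = {
    "PT": "Part-time",
    "FT": "Full-time",
    "CT": "Contract",
    "FL": "Freelance",
}
REMOTE_RATIO = {
    0: "No remote work (less than 20%)",
    50: "Partially remote",
    100: "Fully remote (more than 80%)",
}

def expand_codes(row: dict) -> dict:
    """Return a copy of a record with code columns replaced by readable labels."""
    out = dict(row)
    out["experience_level"] = EXPERIENCE_LEVEL[row["experience_level"]]
    out["employment_type"] = EMPLOYMENT_TYPE[row["employment_type"]]
    out["remote_ratio"] = REMOTE_RATIO[row["remote_ratio"]]
    return out

row = {"experience_level": "SE", "employment_type": "FT", "remote_ratio": 100}
expanded = expand_codes(row)
```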
### Acknowledgements
I'd like to thank ai-jobs.net Salaries for aggregating this data!
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@ruchi798](https://kaggle.com/ruchi798)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
allenai/multixscience_sparse_max | 2022-11-24T16:36:31.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | null | 0 | 25 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==20`
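The BM25 scoring behind this pipeline can be sketched in pure Python. This is a simplified illustration with common default parameters (k1=1.2, b=0.75), not the exact PyTerrier implementation:

```python
import math
from collections import Counter

def bm25_scores(query: list, docs: list, k1: float = 1.2, b: float = 0.75) -> list:
    """Score each tokenized document against a tokenized query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Toy corpus; the real pipeline ranks title+abstract documents.
docs = [["graph", "neural", "networks"], ["support", "vector", "machines"], ["graph", "kernels"]]
scores = bm25_scores(["graph", "networks"], docs)
best = max(range(len(docs)), key=scores.__getitem__)
```

Taking the top `k==20` scored documents per query then implements the `"max"` strategy described above.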
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5482 | 0.2243 | 0.0547 | 0.4063 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5476 | 0.2209 | 0.0553 | 0.4026 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5480 | 0.2272 | 0.055 | 0.4039 | |
sjyhne/mapai_training_data | 2022-09-21T19:30:02.000Z | [
"task_categories:image-segmentation",
"task_ids:semantic-segmentation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"size_categories:10K<n<100K",
"license:mit",
"building-segmentation",
"region:us"
] | sjyhne | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | null | 1 | 25 | ---
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license:
- mit
multilinguality: []
pretty_name: 'MapAI: Precision in Building Segmentation Dataset'
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- building-segmentation
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
# Dataset Card for MapAI: Precision in Building Segmentation Training Dataset
Training data for the MapAI Competition arranged by the Norwegian Mapping Authority, Centre for Artificial Intelligence Research at the University of Agder (CAIR), Norwegian Artificial Intelligence Research Consortium (NORA), AI:Hub, Norkart, and the Danish Agency for Data Supply and Infrastructure.
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nora.ai/competition/mapai-precision-in-building-segmentation/index.html
- **Repository:** https://github.com/Sjyhne/MapAI-Competition
- **Paper:** https://journals.uio.no/NMI/article/view/9849
- **Leaderboard:**
- **Point of Contact:** sander.jyhne@kartverket.no
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/Sjyhne) for adding this dataset. |
farleyknight/big_patent_5_percent | 2022-09-19T21:58:56.000Z | [
"region:us"
] | farleyknight | null | null | null | 0 | 25 | Entry not found |
RamAnanth1/lex-fridman-podcasts | 2022-12-17T21:39:56.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:summarization",
"task_ids:sentiment-analysis",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"... | RamAnanth1 | null | null | null | 0 | 25 | ---
lexicap:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: 'Lex Fridman Podcasts '
size_categories:
- n<1K
task_categories:
- text-classification
- text-generation
- summarization
task_ids:
- sentiment-analysis
- dialogue-modeling
- language-modeling
---
# Dataset Card for Lex Fridman Podcasts Dataset
This dataset is sourced from Andrej Karpathy's [Lexicap website](https://karpathy.ai/lexicap/), which contains English transcripts of Lex Fridman's wonderful podcast episodes. The transcripts were generated using OpenAI's large-sized [Whisper model](https://github.com/openai/whisper) |
allenai/ms2_dense_mean | 2022-11-18T19:40:11.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | null | 0 | 25 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==17`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4764 | 0.2395 | 0.2271 | 0.2418 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4364 | 0.2125 | 0.2131 | 0.2074 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4481 | 0.2224 | 0.2254 | 0.2100 | |
allenai/multinews_dense_oracle | 2022-11-12T04:10:53.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | null | 0 | 25 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
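Given precomputed embeddings, the `"oracle"` top-k step reduces to a cosine-similarity ranking. Below is a minimal NumPy sketch (the actual pipeline embeds texts with `facebook/contriever-msmarco`, which is omitted here):

```python
import numpy as np

def retrieve_oracle(query_emb, doc_embs, k):
    """Return indices of the k most similar documents by cosine similarity,
    where k is the example's original number of input documents."""
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = d @ q
    return np.argsort(-sims)[:k]

# Toy embeddings: five orthogonal "documents", query closest to document 3.
docs = np.eye(5, 8)
query = docs[3] + 0.05 * np.ones(8)
top = retrieve_oracle(query, docs, k=2)
```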
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8661 | 0.6867 | 0.6867 | 0.6867 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8626 | 0.6859 | 0.6859 | 0.6859 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8625 | 0.6927 | 0.6927 | 0.6927 | |
alfredodeza/wine-ratings | 2022-10-15T13:09:06.000Z | [
"region:us"
] | alfredodeza | null | null | null | 2 | 25 | ---
dataset_info:
features:
- name: name
dtype: string
- name: region
dtype: string
- name: variety
dtype: string
- name: rating
dtype: float32
- name: notes
dtype: string
splits:
- name: test
num_bytes: 82422
num_examples: 200
- name: train
num_bytes: 13538613
num_examples: 32780
- name: validation
num_bytes: 83047
num_examples: 200
download_size: 0
dataset_size: 13704082
---
# wine-ratings
Processing, EDA, and ML on wine ratings |
tglcourse/CelebA-faces-cropped-128 | 2022-10-19T10:36:16.000Z | [
"region:us"
] | tglcourse | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: test
num_bytes: 274664364.23
num_examples: 10130
- name: train
num_bytes: 5216078696.499
num_examples: 192469
download_size: 0
dataset_size: 5490743060.729
---
# Dataset Card for "CelebA-faces-cropped-128"
Just a 128px version of the CelebA-faces dataset, which I've cropped to the face regions using dlib. Processing notebook: https://colab.research.google.com/drive/1-P5mKb5VEQrzCmpx5QWomlq0-WNXaSxn?usp=sharing
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml/KSUCCA | 2022-10-26T17:19:37.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 25 | Entry not found |
bigbio/hprd50 | 2022-12-22T15:44:46.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | HPRD50 is a dataset of randomly selected, hand-annotated abstracts of biomedical papers
referenced by the Human Protein Reference Database (HPRD). It is parsed in XML format,
splitting each abstract into sentences, and in each sentence there may be entities and
interactions between those entities. In this particular dataset, entities are all
proteins and interactions are thus protein-protein interactions.
Moreover, all entities are normalized to the HPRD database. These normalized terms are
stored in each entity's 'type' attribute in the source XML. This means the dataset can
determine e.g. that "Janus kinase 2" and "Jak2" are referencing the same normalized
entity.
Because the dataset contains entities and relations, it is suitable for Named Entity
Recognition and Relation Extraction. | @article{fundel2007relex,
title={RelEx—Relation extraction using dependency parse trees},
author={Fundel, Katrin and K{\"u}ffner, Robert and Zimmer, Ralf},
journal={Bioinformatics},
volume={23},
number={3},
pages={365--371},
year={2007},
publisher={Oxford University Press}
} | null | 0 | 25 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: HPRD50
homepage:
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for HPRD50
## Dataset Description
- **Homepage:**
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE,NER
HPRD50 is a dataset of randomly selected, hand-annotated abstracts of biomedical papers
referenced by the Human Protein Reference Database (HPRD). It is parsed in XML format,
splitting each abstract into sentences, and in each sentence there may be entities and
interactions between those entities. In this particular dataset, entities are all
proteins and interactions are thus protein-protein interactions.
Moreover, all entities are normalized to the HPRD database. These normalized terms are
stored in each entity's 'type' attribute in the source XML. This means the dataset can
determine e.g. that "Janus kinase 2" and "Jak2" are referencing the same normalized
entity.
Because the dataset contains entities and relations, it is suitable for Named Entity
Recognition and Relation Extraction.
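Such sentence/entity/interaction XML can be consumed with the Python standard library. The element and attribute names below are illustrative of the structure described above, not the exact HPRD50 schema:

```python
import xml.etree.ElementTree as ET

# Illustrative snippet in the style described above (not verbatim HPRD50 XML);
# the 'type' attribute carries the normalized HPRD identifier.
xml_snippet = """
<sentence id="s0" text="Jak2 binds the receptor.">
  <entity id="e0" type="HPRD_00123" text="Jak2"/>
  <entity id="e1" type="HPRD_00456" text="receptor"/>
  <interaction e1="e0" e2="e1" type="binds"/>
</sentence>
"""

sentence = ET.fromstring(xml_snippet)
entities = {e.get("id"): e.get("text") for e in sentence.findall("entity")}
pairs = [(entities[i.get("e1")], entities[i.get("e2")])
         for i in sentence.findall("interaction")]
```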
## Citation Information
```
@article{fundel2007relex,
title={RelEx—Relation extraction using dependency parse trees},
author={Fundel, Katrin and K{\"u}ffner, Robert and Zimmer, Ralf},
journal={Bioinformatics},
volume={23},
number={3},
pages={365--371},
year={2007},
publisher={Oxford University Press}
}
```
|
proteinea/fluorescence | 2023-01-16T14:51:59.000Z | [
"license:mit",
"doi:10.57967/hf/1086",
"region:us"
] | proteinea | null | null | null | 0 | 25 | ---
license: mit
---
|
zeroshot/arxiv-biology | 2023-01-05T15:43:07.000Z | [
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"arxiv:1905.00075",
"region:us"
] | zeroshot | null | null | null | 3 | 25 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
---

### Dataset Curators
The original data is maintained by [ArXiv](https://arxiv.org/)
### Licensing Information
The data is under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
torchgeo/eurosat | 2023-02-21T04:01:42.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | torchgeo | null | null | null | 1 | 25 | ---
license: mit
task_categories:
- image-classification
language:
- en
pretty_name: EuroSAT
size_categories:
- 10K<n<100K
---
Redistributed without modification from https://github.com/phelber/EuroSAT.
EuroSAT100 is a subset of EuroSATallBands containing only 100 images. It is intended for tutorials and demonstrations, not for benchmarking. |
brianarbuckle/cocktail_recipes | 2023-02-28T04:14:39.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_categories:summarization",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:explanation-generation",
"task_ids:language-modelin... | brianarbuckle | null | null | null | 1 | 25 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- fill-mask
- text-retrieval
- summarization
task_ids:
- document-retrieval
- entity-linking-retrieval
- explanation-generation
- language-modeling
- masked-language-modeling
pretty_name: Cocktail Recipes
dataset_info:
features:
- name: title
dtype: string
- name: ingredients
sequence: string
- name: directions
sequence: string
- name: misc
sequence: string
- name: source
dtype: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 301501
num_examples: 875
download_size: 96915
dataset_size: 301501
---
# Dataset Card for Cocktail Recipes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
## Dataset Description
### Dataset Summary
Cocktail Recipes Dataset for Semi-Structured Text Generation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
```json
{
  "title": "Final Ward",
  "ingredients": [
    "0.75 oz. Rye Whiskey",
    "0.75 oz. Lemon Juice",
    "0.75 oz. Maraschino Liqueur",
    "0.75 oz. Green Chartreuse"
  ],
  "directions": ["shake on ice and strain"],
  "misc": [],
  "source": "Death & Co.",
  "ner": ["whiskey", "chartreuse", "maraschino liqueur"]
}
```
### Data Fields
- `title` (`str`): Title of the recipe.
- `ingredients` (`list` of `str`): Ingredients.
- `directions` (`list` of `str`): Instruction steps.
- `misc` (`list` of `str`): Miscellaneous notes (often empty, as in the instance above).
- `source` (`str`): Origin of the recipe.
- `ner` (`list` of `str`): NER entities.
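A small sanity check one might run on an instance like the one above, matching each `ner` entity against the ingredient lines (a hypothetical helper, not part of the dataset):

```python
# Hypothetical helper: check which NER entities from a recipe actually
# appear in its ingredient lines (case-insensitive substring match).
def entities_in_ingredients(ingredients, ner):
    text = " ".join(ingredients).lower()
    return [e for e in ner if e.lower() in text]

recipe = {
    "ingredients": ["0.75 oz. Rye Whiskey", "0.75 oz. Lemon Juice",
                    "0.75 oz. Maraschino Liqueur", "0.75 oz. Green Chartreuse"],
    "ner": ["whiskey", "chartreuse", "maraschino liqueur"],
}
print(entities_in_ingredients(recipe["ingredients"], recipe["ner"]))
# ['whiskey', 'chartreuse', 'maraschino liqueur']
```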
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
|
nanaaaa/emotion_chinese_english | 2023-03-05T10:36:14.000Z | [
"task_categories:text-classification",
"language:zh",
"language:en",
"doi:10.57967/hf/1019",
"region:us"
] | nanaaaa | The emotion_chinese_english dataset is a multilingual emotion dataset annotated by language experts under a project. The dataset can be used for tasks such as multilingual (Chinese and English) emotion classification and identification. | null | null | 5 | 25 | ---
task_categories:
- text-classification
language:
- zh
- en
--- |
soymia/boudoir-dataset | 2023-03-01T10:39:34.000Z | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
] | soymia | null | null | null | 1 | 25 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 96479861.365
num_examples: 1055
download_size: 95036573
dataset_size: 96479861.365
license: apache-2.0
task_categories:
- text-to-image
pretty_name: Boudoir Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for "boudoir-dataset"
### Dataset Summary
Images scraped from selected galleries on Behance. |
Sree1994/babylm_100M | 2023-03-11T07:52:29.000Z | [
"region:us"
] | Sree1994 | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 62663655
num_examples: 255000
- name: test
num_bytes: 7636573
num_examples: 35000
- name: valid
num_bytes: 7636573
num_examples: 35000
download_size: 0
dataset_size: 77936801
---
# Dataset Card for "babylm_100M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mohammadnajeeb/concrete_crack_images | 2023-03-05T19:14:43.000Z | [
"license:cc-by-4.0",
"region:us"
] | mohammadnajeeb | null | null | null | 0 | 25 | ---
license: cc-by-4.0
---
|
Francesco/printed-circuit-board | 2023-03-30T09:11:49.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': printed-circuit-board
'1': Button
'2': Capacitor
'3': Capacitor Jumper
'4': Clock
'5': Connector
'6': Diode
'7': EM
'8': Electrolytic Capacitor
'9': Ferrite Bead
'10': IC
'11': Inductor
'12': Jumper
'13': Led
'14': Pads
'15': Pins
'16': Resistor
'17': Resistor Jumper
'18': Resistor Network
'19': Switch
'20': Test Point
'21': Transistor
'22': Unknown Unlabeled
'23': iC
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: printed-circuit-board
tags:
- rf100
---
# Dataset Card for printed-circuit-board
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/printed-circuit-board
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
printed-circuit-board
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
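Since `bbox` uses the COCO `[x, y, width, height]` convention, corner coordinates and the `area` field can be recovered with a short sketch (the first box of the sample instance is used below):

```python
# Convert a COCO-format [x, y, width, height] box to corner coordinates
# and recompute its area (which matches the `area` field of the instance).
def coco_to_corners(bbox):
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

def coco_area(bbox):
    return int(bbox[2] * bbox[3])

bbox = [302.0, 109.0, 73.0, 52.0]  # first box from the sample instance
print(coco_to_corners(bbox))  # [302.0, 109.0, 375.0, 161.0]
print(coco_area(bbox))        # 3796, matching objects['area'][0]
```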
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/printed-circuit-board
### Citation Information
```
@misc{ printed-circuit-board,
title = { printed circuit board Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/printed-circuit-board } },
url = { https://universe.roboflow.com/object-detection/printed-circuit-board },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
A-Roucher/english_historical_quotes | 2023-05-17T12:49:06.000Z | [
"task_categories:text-classification",
"task_categories:conversational",
"task_categories:fill-mask",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"history",
"philosophy",
"art",
"region:us"
] | A-Roucher | null | null | null | 1 | 25 | ---
license: mit
language:
- en
tags:
- history
- philosophy
- art
pretty_name: Historical Quotes - English
size_categories:
- 10K<n<100K
task_categories:
- text-classification
- conversational
- fill-mask
---
# Dataset Card for English Historical Quotes
# I-Dataset Summary
english_historical_quotes is a dataset of historical quotes.
It can be used for multi-label text classification and text generation. The content of each quote is in English.
# II-Supported Tasks and Leaderboards
Multi-label text classification: the dataset can be used to train a model for text classification, which consists of classifying quotes by author as well as by topic (using tags). Success on this task is typically measured by achieving a high accuracy.
Text generation: the dataset can be used to train a model to generate quotes by fine-tuning an existing pretrained model on the corpus composed of all quotes (or of the quotes by a given author).
# III-Languages
The texts in the dataset are in English (en).
# IV-Dataset Structure
Data Instances
A JSON-formatted example of a typical instance in the dataset:
```json
{
  "quote": "Almost anyone can be an author the business is to collect money and fame from this state of being.",
  "author": "A. A. Milne",
  "categories": "['business', 'money']"
}
```
### Data Fields
- `author`: The author of the quote.
- `quote`: The text of the quote.
- `tags`: Topics associated with the quote (stored as `categories` in the example above).
### Data Splits
The dataset is one block, so that it can be further processed using Hugging Face `datasets` functions like the `.train_test_split()` method.
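For illustration, here is a stdlib-only sketch of the kind of shuffled split that `.train_test_split()` performs; with the actual dataset you would call the `datasets` method directly rather than this toy function:

```python
import random

# Toy reimplementation of a seeded train/test split over a list of items.
def train_test_split(items, test_size=0.2, seed=42):
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)       # reproducible shuffle
    n_test = int(len(items) * test_size)
    test = [items[i] for i in idx[:n_test]]
    train = [items[i] for i in idx[n_test:]]
    return train, test

train, test = train_test_split(list(range(100)), test_size=0.2)
print(len(train), len(test))  # 80 20
```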
# V-Dataset Creation
Curation Rationale
The goal is to share good datasets with the HuggingFace community so that they can use them in NLP tasks and advance artificial intelligence.
### Source Data
The data has been aggregated from various open-access internet archives. Then it has been manually refined, duplicates and false quotes removed by me.
It is the backbone of my website [dixit.app](http://dixit.app), which allows to search historical quotes through semantic search.
# VI-Additional Informations
### Dataset Curators
Aymeric Roucher
### Licensing Information
This work is licensed under the MIT License. |
bakhitovd/ML_arxiv | 2023-05-19T21:47:33.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"region:us"
] | bakhitovd | null | null | null | 0 | 25 | ---
license: cc0-1.0
task_categories:
- summarization
language:
- en
pretty_name: ML Articles Subset of Scientific Papers
size_categories:
- 10K<n<100K
---
# Dataset Card for 'ML Articles Subset of Scientific Papers' Dataset
## Dataset Summary
The dataset consists of 32,621 instances from the 'Scientific papers' dataset, a collection of scientific papers and summaries from the arXiv repository. This subset focuses on the articles closest in semantics, vocabulary, and structure to articles describing machine learning. It was created using sentence embeddings and K-means clustering.
## Supported Tasks and Leaderboards
The dataset supports tasks related to text summarization. Particularly, the dataset was created for fine-tuning transformer models for summarization. There are no established leaderboards at this moment.
## Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
An instance in the dataset includes a scientific paper and its summary, both in English.
### Data Fields
- `article`: The full text of the scientific paper.
- `abstract`: The summary of the paper.
### Data Splits
The dataset is split into:
- training subset: 30,280 articles
- validation subset: 1,196 articles
- test subset: 1,145 articles
## Dataset Creation
### Methods
The subset was created using sentence embeddings from a transformer model, SciBERT. The embeddings were clustered into 6 clusters with the K-means algorithm, and the cluster closest (by cosine similarity) to articles strongly related to machine learning was chosen to form this dataset.
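The cosine-similarity measure used in the cluster selection can be sketched as follows (toy vectors only; the actual selection used SciBERT sentence embeddings):

```python
import math

# Minimal cosine similarity between two equal-length vectors.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```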
### Source Data
The dataset is a subset of the 'Scientific papers' dataset, which includes scientific papers from the ArXiv repository.
### Social Impact
This dataset could help improve the quality of summarization models for machine learning research articles, which in turn can make such content more accessible.
### Discussion of Biases
As the dataset focuses on machine learning articles, it may not be representative of scientific papers in general or other specific domains.
### Other Known Limitations
As the dataset has been selected based on a specific methodology, it may not include all machine learning articles or may inadvertently include non-machine learning articles.
### Dataset Curators
The subset was created as part of a project aimed to build an effective summarization model for Machine Learning articles. |
sbmaruf/forai_ml_masakhane_mafand | 2023-05-25T00:11:20.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:fr",
"language:am",
"language:bm",
"lang... | sbmaruf | MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are:
- Amharic
- Bambara
- Ghomala
- Ewe
- Fon
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Nigerian-Pidgin
- Chichewa
- Shona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
The train/validation/test sets are available for 16 languages, and validation/test set for amh, kin, nya, sna, and xho
For more details see https://aclanthology.org/2022.naacl-main.223/ | @inproceedings{adelani-etal-2022-thousand,
title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation",
author = "Adelani, David and
Alabi, Jesujoba and
Fan, Angela and
Kreutzer, Julia and
Shen, Xiaoyu and
Reid, Machel and
Ruiter, Dana and
Klakow, Dietrich and
Nabende, Peter and
Chang, Ernie and
Gwadabe, Tajuddeen and
Sackey, Freshia and
Dossou, Bonaventure F. P. and
Emezue, Chris and
Leong, Colin and
Beukman, Michael and
Muhammad, Shamsuddeen and
Jarso, Guyo and
Yousuf, Oreen and
Niyongabo Rubungo, Andre and
Hacheme, Gilles and
Wairagala, Eric Peter and
Nasir, Muhammad Umair and
Ajibade, Benjamin and
Ajayi, Tunde and
Gitau, Yvonne and
Abbott, Jade and
Ahmed, Mohamed and
Ochieng, Millicent and
Aremu, Anuoluwapo and
Ogayo, Perez and
Mukiibi, Jonathan and
Ouoba Kabore, Fatoumata and
Kalipe, Godson and
Mbaye, Derguene and
Tapo, Allahsera Auguste and
Memdjokam Koagne, Victoire and
Munkoh-Buabeng, Edwin and
Wagner, Valencia and
Abdulmumin, Idris and
Awokoya, Ayodele and
Buzaaba, Happy and
Sibanda, Blessing and
Bukula, Andiswa and
Manthalu, Sam",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.223",
doi = "10.18653/v1/2022.naacl-main.223",
pages = "3053--3070",
abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.",
} | null | 1 | 25 | ---
annotations_creators:
- expert-generated
language:
- en
- fr
- am
- bm
- bbj
- ee
- fon
- ha
- ig
- lg
- mos
- ny
- pcm
- rw
- sn
- sw
- tn
- tw
- wo
- xh
- yo
- zu
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- translation
- multilingual
pretty_name: mafand
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- news, mafand, masakhane
task_categories:
- translation
task_ids: []
---
An unofficial version of https://huggingface.co/datasets/masakhane/mafand
We created a different data loader for a @forai_ml project. |
howard-hou/OCR-VQA | 2023-04-24T01:29:24.000Z | [
"region:us"
] | howard-hou | null | null | null | 1 | 25 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: string
- name: questions
sequence: string
- name: answers
sequence: string
- name: ocr_tokens
sequence: string
- name: ocr_info
list:
- name: word
dtype: string
- name: bounding_box
struct:
- name: width
dtype: float64
- name: height
dtype: float64
- name: top_left_x
dtype: float64
- name: top_left_y
dtype: float64
- name: title
dtype: string
- name: authorName
dtype: string
- name: genre
dtype: string
- name: image_width
dtype: int64
- name: image_height
dtype: int64
- name: image_url
dtype: string
- name: set_name
dtype: string
splits:
- name: train
num_bytes: 7503971854.0
num_examples: 166022
- name: test
num_bytes: 928616409.0
num_examples: 20796
- name: validation
num_bytes: 920236957.0
num_examples: 20731
download_size: 2329997099
dataset_size: 9352825220.0
---
# Dataset Card for "OCR-VQA"
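The `bounding_box` struct in the metadata stores float coordinates alongside integer `image_width`/`image_height` fields. Assuming the floats are normalized to [0, 1] (an assumption worth verifying against the data), pixel-space boxes can be recovered like this:

```python
# Hedged sketch: convert a normalized bounding_box struct to pixel units.
def to_pixels(box, image_width, image_height):
    return {
        "x": box["top_left_x"] * image_width,
        "y": box["top_left_y"] * image_height,
        "w": box["width"] * image_width,
        "h": box["height"] * image_height,
    }

box = {"top_left_x": 0.25, "top_left_y": 0.5, "width": 0.5, "height": 0.25}
print(to_pixels(box, 400, 600))
# {'x': 100.0, 'y': 300.0, 'w': 200.0, 'h': 150.0}
```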
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
christinacdl/binary_hate_speech | 2023-05-06T09:14:27.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | christinacdl | null | null | null | 0 | 25 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
--- |
gsarti/iwslt2017_context | 2023-05-07T14:09:24.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ar",
"language:de",
"language:en",
"language:fr",
"language:it",
"language:ja",
"language... | gsarti | The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. As unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. | @inproceedings{cettolo-etal-2017-overview,
title = "Overview of the {IWSLT} 2017 Evaluation Campaign",
author = {Cettolo, Mauro and
Federico, Marcello and
Bentivogli, Luisa and
Niehues, Jan and
St{\\"u}ker, Sebastian and
Sudoh, Katsuhito and
Yoshino, Koichiro and
Federmann, Christian},
booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation",
month = dec # " 14-15",
year = "2017",
address = "Tokyo, Japan",
publisher = "International Workshop on Spoken Language Translation",
url = "https://aclanthology.org/2017.iwslt-1.1",
pages = "2--14",
} | null | 1 | 25 | ---
annotations_creators:
- crowdsourced
language:
- ar
- de
- en
- fr
- it
- ja
- ko
- nl
- ro
- zh
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2017
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2017
dataset_info:
- config_name: iwslt2017-en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-it-en
features:
- name: translation
dtype:
translation:
languages:
- it
- en
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-it-nl
features:
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-it-ro
features:
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-nl-en
features:
- name: translation
dtype:
translation:
languages:
- nl
- en
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-nl-it
features:
- name: translation
dtype:
translation:
languages:
- nl
- it
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-nl-ro
features:
- name: translation
dtype:
translation:
languages:
- nl
- ro
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ro-en
features:
- name: translation
dtype:
translation:
languages:
- ro
- en
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-ro-it
features:
- name: translation
dtype:
translation:
languages:
- ro
- it
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-ro-nl
features:
- name: translation
dtype:
translation:
languages:
- ro
- nl
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 27748780
dataset_size: 58736561
- config_name: iwslt2017-de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758320
dataset_size: 44427829
- config_name: iwslt2017-en-ar
features:
- name: translation
dtype:
translation:
languages:
- en
- ar
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 29333173
dataset_size: 58736561
- config_name: iwslt2017-en-de
features:
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758334
dataset_size: 44427829
- config_name: iwslt2017-en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 27699724
dataset_size: 51248330
- config_name: iwslt2017-en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26983602
dataset_size: 50222118
- config_name: iwslt2017-en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364776
dataset_size: 53767131
- config_name: iwslt2017-en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 27597071
dataset_size: 46079068
- config_name: iwslt2017-fr-en
features:
- name: translation
dtype:
translation:
languages:
- fr
- en
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 26880731
dataset_size: 51248330
- config_name: iwslt2017-ja-en
features:
- name: translation
dtype:
translation:
languages:
- ja
- en
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26190859
dataset_size: 50222118
- config_name: iwslt2017-ko-en
features:
- name: translation
dtype:
translation:
languages:
- ko
- en
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364733
dataset_size: 53767131
- config_name: iwslt2017-zh-en
features:
- name: translation
dtype:
translation:
languages:
- zh
- en
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 26849290
dataset_size: 46079068
---
# Dataset Card for IWSLT 2017
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2017/TED-tasks](https://sites.google.com/site/iwsltevaluation2017/TED-tasks)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Overview of the IWSLT 2017 Evaluation Campaign](https://aclanthology.org/2017.iwslt-1.1/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.24 GB
- **Size of the generated dataset:** 1.14 GB
- **Total amount of disk used:** 5.38 GB
*This repository contains a modified version of the loading script used in the official [iwslt2017](https://huggingface.co/datasets/iwslt2017) repository, updated to include document and segment information for all available sentence pairs, enabling their use for document-level and context-aware MT applications. Refer to the original repository for additional information.*
|
pszemraj/dolly_hhrlhf-text2text | 2023-05-18T20:07:42.000Z | [
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"source_datasets:mosaicml/dolly_hhrlhf",
"language:en",
"license:cc-by-sa-3.0",
"instruct",
"region:us"
] | pszemraj | null | null | null | 1 | 25 | ---
license: cc-by-sa-3.0
task_categories:
- text2text-generation
language:
- en
tags:
- instruct
size_categories:
- 10K<n<100K
source_datasets: mosaicml/dolly_hhrlhf
---
# dolly_hhrlhf-text2text
This is `mosaicml/dolly_hhrlhf` with the following changes:
- clean up/adapt `prompt` column for the `text2text-generation` task (no need for a special template)
- split the original `train` set into a 95% train and an explicit validation set (5%)
- fixed extra spaces in punctuation (as this is not a French dataset)
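The extra-space fix mentioned above can be sketched with a small regex helper (a hypothetical reconstruction, not necessarily the exact rule used to build this dataset):

```python
import re

def fix_punctuation_spaces(text: str) -> str:
    # Remove stray whitespace before common punctuation marks.
    return re.sub(r"\s+([?!.,;:])", r"\1", text)

print(fix_punctuation_spaces("How can I be healthy ?"))  # How can I be healthy?
```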
details on extra spaces:
```
Original sentence 1: How can I be healthy ?
Fixed sentence 1: How can I be healthy?
``` |
kunishou/cnn-dailymail-27k-ja | 2023-05-19T04:37:02.000Z | [
"license:mit",
"region:us"
] | kunishou | null | null | null | 5 | 25 | ---
license: mit
---
This dataset was created by automatically translating part of "cnn_dailymail" into Japanese.
cnn_dailymail repository
https://github.com/abisee/cnn-dailymail
cnn_dailymail
https://huggingface.co/datasets/cnn_dailymail |
tianyang/repobench-r | 2023-06-17T03:06:46.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:code",
"license:cc-by-nc-nd-4.0",
"arxiv:2306.03091",
"region:us"
] | tianyang | RepoBench is a dataset that benchmarks repository-level code auto-completion systems.
RepoBench-R denotes RepoBench for Retrieval, which is a sub-task of RepoBench,
aiming to evaluate the ability of code auto-completion systems to retrieve
relevant code snippets for next-line code completion. | @misc{liu2023repobench,
title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
author={Tianyang Liu and Canwen Xu and Julian McAuley},
year={2023},
eprint={2306.03091},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 25 | ---
language_creators:
- found
language:
- code
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
pretty_name: RepoBench-Retrieval
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for RepoBench-R
## Dataset Description
- **Homepage:** https://github.com/Leolty/repobench
- **Paper:** https://arxiv.org/abs/2306.03091
## Dataset Summary
**RepoBench-R (Retrieval)** is a subtask of **RepoBench** ([GitHub](https://github.com/Leolty/repobench), [arXiv](https://arxiv.org/abs/2306.03091)), targeting the retrieval component of a repository-level auto-completion system: retrieving the most relevant code snippet from a project repository for next-line code prediction.
## Settings
- `cff`: short for cross_file_first, indicating that the cross-file module in the next line is used for the first time in the current file.
- `cfr`: short for cross_file_random, indicating that the cross-file module in the next line is not used for the first time in the current file.
## Supported Tasks
The dataset has 4 subsets:
- `python_cff`: python dataset with `cff` setting.
- `python_cfr`: python dataset with `cfr` setting.
- `java_cff`: java dataset with `cff` setting.
- `java_cfr`: java dataset with `cfr` setting.
Each subset has 4 splits:
- `train_easy`: training set with easy difficulty, where the number of code snippets in the context \\(k\\) satisfies \\( 5 \leq k < 10 \\).
- `train_hard`: training set with hard difficulty, where the number of code snippets in the context \\(k\\) satisfies \\( k \geq 10 \\).
- `test_easy`: testing set with easy difficulty.
- `test_hard`: testing set with hard difficulty.
## Loading Data
For example, if you want to load the `test` `cross_file_first` `python` dataset with `easy` difficulty, you can use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("tianyang/repobench-r", "python_cff", split="test_easy")
```
> Note: The `split` argument is optional. If not provided, the entire dataset (including train and test data at both easy and hard difficulty levels) will be loaded.
## Dataset Structure
```json
{
"repo_name": "repository name of the data point",
"file_path": "path/to/file",
"context": [
"snippet 1",
"snippet 2",
// ...
"snippet k"
],
"import_statement": "all import statements in the file",
"gold_snippet_idex": 2, // the index of the gold snippet in the context list, 0~k-1
"code": "the code for next-line prediction",
"next_line": "the next line of the code"
}
```
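Given instances of this shape, a retriever can be scored by how often it selects the gold snippet. The sketch below is a hypothetical evaluation harness (the `longest_snippet` baseline and the toy instance are stand-ins invented for illustration, not part of RepoBench itself):

```python
def retrieval_accuracy(instances, retrieve):
    """Fraction of instances where `retrieve` returns the gold snippet index."""
    hits = 0
    for ex in instances:
        predicted = retrieve(ex["code"], ex["context"])
        if predicted == ex["gold_snippet_idex"]:  # field name as in the card
            hits += 1
    return hits / len(instances)

# Toy baseline: always pick the longest snippet in the context.
def longest_snippet(code, context):
    return max(range(len(context)), key=lambda i: len(context[i]))

toy = [{"code": "import util\n", "context": ["a", "def helper(): ...", "b"],
        "gold_snippet_idex": 1}]
print(retrieval_accuracy(toy, longest_snippet))  # 1.0
```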
## Licensing Information
CC BY-NC-ND 4.0
## Citation Information
```bibtex
@misc{liu2023repobench,
title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
author={Tianyang Liu and Canwen Xu and Julian McAuley},
year={2023},
eprint={2306.03091},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contributions
Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset. |
zachary-shah/musdb18-spec-pix2pix-test | 2023-06-11T15:21:15.000Z | [
"region:us"
] | zachary-shah | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 18297334.0
num_examples: 196
download_size: 18266177
dataset_size: 18297334.0
---
# Dataset Card for "musdb18-spec-pix2pix-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mrjunos/depression-reddit-cleaned | 2023-06-17T02:03:22.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"reddit",
"Sentiment ",
"depression",
"region:us"
] | mrjunos | The dataset provided is a Depression: Reddit Dataset (Cleaned) containing approximately
7,000 labeled instances. It consists of two main features: 'text' and 'label'.
The 'text' feature contains the text data from Reddit posts related to depression, while
the 'label' feature indicates whether a post is classified as depression or not.
The raw data for this dataset was collected by web scraping Subreddits. To ensure the data's
quality and usefulness, multiple natural language processing (NLP) techniques were applied
to clean the data. The dataset exclusively consists of English-language posts, and its
primary purpose is to facilitate mental health classification tasks.
This dataset can be employed in various natural language processing tasks related to
depression, such as sentiment analysis, topic modeling, text classification, or any other NLP
task that requires labeled data pertaining to depression from Reddit. | null | null | 1 | 25 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
tags:
- reddit
- 'Sentiment '
- depression
pretty_name: Depression Reddit Cleaned
size_categories:
- 1K<n<10K
---
# Depression: Reddit Dataset (Cleaned)
**~7000 Cleaned Reddit Labelled Dataset on Depression**
### Summary
- The dataset provided is a Depression: Reddit Dataset (Cleaned) containing approximately 7,000 labeled instances. It consists of two main features: 'text' and 'label'. The 'text' feature contains the text data from Reddit posts related to depression, while the 'label' feature indicates whether a post is classified as depression or not.
- The raw data for this dataset was collected by web scraping Subreddits. To ensure the data's quality and usefulness, multiple natural language processing (NLP) techniques were applied to clean the data. The dataset exclusively consists of English-language posts, and its primary purpose is to facilitate mental health classification tasks.
- This dataset can be employed in various natural language processing tasks related to depression, such as sentiment analysis, topic modeling, text classification, or any other NLP task that requires labeled data pertaining to depression from Reddit.
- Extracted from Kaggle: https://www.kaggle.com/datasets/infamouscoder/depression-reddit-cleaned |
TalTechNLP/AMIsum | 2023-06-21T12:18:51.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | TalTechNLP | null | null | null | 1 | 25 | ---
pretty_name: AMIsum
annotations_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: ami-sum
---
# Dataset Card for "AMIsum"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
AMIsum is a meeting summarization dataset based on the AMI Meeting Corpus (https://groups.inf.ed.ac.uk/ami/corpus/). The dataset uses the meeting transcripts as the source data and the abstractive summaries as the target data.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English
## Dataset Structure
### Data Instances
```
{'transcript': '<PM> Okay. <PM> Right. <PM> Um well this is the kick-off meeting for our our project. <PM> Um and um this is just what we're gonna be doing over the next twenty five minutes. <ME> Mm-hmm. <PM> Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. <PM> Do you want to introduce yourself again? <ME> Great. [...]', 'summary': 'The project manager introduced the upcoming project to the team members and then the team members participated in an exercise in which they drew their favorite animal and discussed what they liked about the animal. The project manager talked about the project finances and selling prices. The team then discussed various features to consider in making the remote.', 'id': 'ES2002a'}
```
### Data Fields
```
transcript: Expert generated transcript.
summary: Expert generated summary.
id: Meeting id.
```
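For illustration, the speaker-tagged transcript format shown in the data instance above (`<PM> ...`, `<ME> ...`) can be split into (speaker, utterance) pairs with a small helper (a hypothetical sketch, not part of the dataset tooling):

```python
import re

def split_turns(transcript: str):
    # Each turn starts with a speaker tag such as <PM> or <ME>.
    return [(speaker, utterance.strip())
            for speaker, utterance in re.findall(r"<(\w+)>\s*([^<]+)", transcript)]

print(split_turns("<PM> Okay. <ME> Mm-hmm. <PM> Right."))
# [('PM', 'Okay.'), ('ME', 'Mm-hmm.'), ('PM', 'Right.')]
```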
### Data Splits
|train|validation|test|
|:----|:---------|:---|
|97|20|20| |
Jumtra/jglue_jsquads_with_input | 2023-06-21T00:25:40.000Z | [
"region:us"
] | Jumtra | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 44660349
num_examples: 67301
download_size: 8923113
dataset_size: 44660349
---
# Dataset Card for "jglue_jsquads_with_input"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Glavin001/startup-interviews | 2023-06-29T05:59:47.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-2.0",
"region:us"
] | Glavin001 | null | null | null | 7 | 25 | ---
license: cc-by-nc-2.0
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- n<1K
--- |
Einstellung/demo-salaries | 2023-06-27T23:41:27.000Z | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"task_ids:tabular-single-column-regression",
"task_ids:tabular-multi-label-classification",
"language_creators:crowdsourced",
"size_categories:n<1k",
"source_datasets:aijobs.net",
"language:en",
"language:es",
"license... | Einstellung | null | null | null | 1 | 25 | ---
language:
- en
- es
license: apache-2.0
tags:
- tabular
- "2023"
- Jobs
- Computer Science
language_creators:
- crowdsourced
pretty_name: pretty_name
size_categories:
- n<1k
source_datasets:
- aijobs.net
task_categories:
- tabular-regression
- tabular-classification
task_ids:
- tabular-single-column-regression
- tabular-multi-label-classification
# configs: # Optional for datasets with multiple configurations like glue.
# - sst2 # Example for glue: sst2
# - cola # Example for glue: cola
dataset_info:
features:
- name: work_year
dtype: int64
- name: experience_level
dtype: string
- name: employment_type
dtype: string
- name: job_title
dtype: string
- name: salary
dtype: int64
- name: salary_currency
dtype: string
- name: salary_in_usd
dtype: int64
- name: employee_residence
dtype: string
- name: remote_ratio
dtype: int64
- name: company_location
dtype: string
- name: company_size
dtype: string
config_name: sst2
splits:
- name: train
num_bytes: 79317110
num_examples: 87599
download_size: 35142551
dataset_size: 89789763
---
## Dataset Description
- **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]()
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
### Dataset Summary
Briefly summarize the dataset, its intended use and the supported tasks. Give an overview of how and why the dataset was created. The summary should explicitly mention the languages present in the dataset (possibly in broad terms, e.g. *translations between several pairs of European languages*), and describe the domain, topic, or genre covered.
### Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).
- `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
### Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available.
## Dataset Structure
### Data Instances
Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
```
{
'example_field': ...,
...
}
```
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
### Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- `example_field`: description of `example_field`
Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [Datasets Tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.
### Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
### Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
#### Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
#### Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
#### Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
### Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
## Considerations for Using the Data
### Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
### Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
### Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
## Additional Information
### Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
### Licensing Information
Provide the license and link to the license webpage if available.
### Citation Information
Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:
```
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
```
If the dataset has a [DOI](https://www.doi.org/), please provide it here.
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
eswardivi/medical_qa | 2023-06-30T07:16:32.000Z | [
"license:mit",
"region:us"
] | eswardivi | null | null | null | 0 | 25 | ---
license: mit
---
|
Lurunchik/WikiHowNFQA | 2023-07-08T21:16:53.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"multi-document NFQA",
"non-factoid QA",
"region:us"
] | Lurunchik | null | null | null | 4 | 25 | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- multi-document NFQA
- non-factoid QA
pretty_name: wikihowqa
size_categories:
- 10K<n<100K
---
# Dataset Card for WikiHowQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Instances](#data-instances)
- [Data Statistics](#data-statistics)
- [Dataset Information](#dataset-information)
- [Dataset Usage](#dataset-usage)
- [Additional Information](#additional-information)
- [Dataset Curators](#curators)
- [Licensing Information](#license)
- [Citation Information](#citation)
- [Considerations for Using the Data](#considerations)
- [Social Impact of Dataset](#social-impact)
- [Discussion of Biases](#biases)
- [Other Known Limitations](#limitations)
- [Data Loading](#data-loading)
<a name="dataset-description"></a>
## Dataset Description
- **Homepage:** [WikiHowQA Dataset](https://lurunchik.github.io/WikiHowQA/)
- **Repository:** [WikiHowQA Repository](https://github.com/lurunchik/WikiHowQA)
- **Paper:** [WikiHowQA Paper](https://lurunchik.github.io/WikiHowQA/data/ACL_MD_NFQA_dataset.pdf)
- **Leaderboard:** [WikiHowQA Leaderboard](https://lurunchik.github.io/WikiHowQA/leaderboard)
- **Point of Contact:** [Contact](mailto:s3802180@student.rmit.edu.au)
**WikiHowQA** is a unique collection of 'how-to' content from WikiHow, transformed into a rich dataset featuring 11,746 human-authored answers and 74,527 supporting documents. Designed for researchers, it presents a unique opportunity to tackle the challenges of creating comprehensive answers from multiple documents, and grounding those answers in the real-world context provided by the supporting documents.
<a name="dataset-structure"></a>
## Dataset Structure
### Data Fields
- `article_id`: An integer identifier for the article, corresponding to the article_id from the WikiHow API.
- `question`: The non-factoid instructional question.
- `answer`: The human-written answer to the question, corresponding to the human-written article summary on the [WikiHow website](https://www.wikihow.com/Main-Page).
- `related_document_urls_wayback_snapshots`: A list of URLs to web archive snapshots of related documents, corresponding to the references of the WikiHow article.
- `split`: The split of the dataset that the instance belongs to ('train', 'validation', or 'test').
- `cluster`: An integer identifier for the cluster that the instance belongs to. <!-- The dataset is split into 'train', 'validation', and 'test' such that all instances from the same cluster belong to the same split. This is to ensure that there is no intersection of paraphrased questions across different splits. If you plan to create a new split of the dataset, it is important to maintain this clustering to avoid data leakage between splits. -->
<a name="dataset-instances"></a>
### Data Instances
An example instance from the WikiHowQA dataset:
```json
{
'article_id': 1353800,
'question': 'How To Cook Pork Tenderloin',
'answer': 'To cook pork tenderloin, put it in a roasting pan and cook it in the oven for 55 minutes at 400 degrees Fahrenheit, turning it over halfway through. You can also sear the pork tenderloin on both sides in a skillet before putting it in the oven, which will reduce the cooking time to 15 minutes. If you want to grill pork tenderloin, start by preheating the grill to medium-high heat. Then, cook the tenderloin on the grill for 30-40 minutes over indirect heat, flipping it occasionally.',
'related_document_urls_wayback_snapshots': ['http://web.archive.org/web/20210605161310/https://www.allrecipes.com/recipe/236114/pork-roast-with-the-worlds-best-rub/', 'http://web.archive.org/web/20210423074902/https://www.bhg.com/recipes/how-to/food-storage-safety/using-a-meat-thermometer/', ...],
'split': 'train',
'cluster': 2635
}
```
<a name="dataset-statistics"></a>
### Dataset Statistics
- Number of human-authored answers: 11,746
- Number of supporting documents: 74,527
- Average number of documents per question: 6.3
- Average number of sentences per answer: 3.9
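Statistics like the document count per question can be recomputed from loaded instances. The sketch below is a hypothetical helper operating on records shaped like the data instance above (the toy records are invented for illustration):

```python
def avg_docs_per_question(instances):
    # Count the supporting-document snapshots attached to each question.
    counts = [len(ex["related_document_urls_wayback_snapshots"]) for ex in instances]
    return sum(counts) / len(counts)

toy = [
    {"related_document_urls_wayback_snapshots": ["u1", "u2", "u3"]},
    {"related_document_urls_wayback_snapshots": ["u4"]},
]
print(avg_docs_per_question(toy))  # 2.0
```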
<a name="dataset-information"></a>
### Dataset Information
The WikiHowQA dataset is divided into two parts: the QA part and the Document Content part.
The QA part of the dataset contains questions, answers, and only links to web archive snapshots of related HTML pages and can be downloaded here.
The Document Content part contains parsed HTML content and is accessible upon request, after signing a Data Transfer Agreement with RMIT University.
Each dataset instance includes a question, a set of related documents, and a human-authored answer. The questions are non-factoid, requiring comprehensive, multi-sentence answers. The related documents provide the necessary information to generate an answer.
<a name="dataset-usage"></a>
## Dataset Usage
The dataset is designed for researchers and presents a unique opportunity to tackle the challenges of creating comprehensive answers from multiple documents, and grounding those answers in the real-world context provided by the supporting documents.
<a name="additional-information"></a>
## Additional Information
<a name="curators"></a>
### Dataset Curators
The WikiHowQA dataset was curated by researchers at RMIT University.
<a name="license"></a>
### Licensing Information
The QA dataset part is distributed under the Creative Commons Attribution (CC BY) license.
The Dataset Content part, containing the parsed HTML content, is accessible upon request, after signing a Data Transfer Agreement with RMIT University that allows free use of the dataset for research purposes. The form to download and sign is available on the dataset website by the link [].
<a name="citation"></a>
### Citation Information
Please cite the following paper if you use this dataset:
```bibtex
@inproceedings{bolotova2023wikihowqa,
title={WikiHowQA: A Comprehensive Benchmark for Multi-Document Non-Factoid Question Answering},
author={Bolotova, Valeriia and Blinov, Vladislav and Filippova, Sofya and Scholer, Falk and Sanderson, Mark},
    booktitle={Proceedings of the 61st Conference of the Association for Computational Linguistics},
year={2023}
}
```
<a name="considerations"></a>
## Considerations for Using the Data
<a name="social-impact"></a>
### Social Impact of the Dataset
The WikiHowQA dataset is a rich resource for researchers interested in question answering, information retrieval, and natural language understanding tasks. It can help in developing models that provide comprehensive answers to how-to questions, which can be beneficial in various applications such as customer support, tutoring systems, and personal assistants. However, as with any dataset, the potential for misuse or unintended consequences exists. For example, a model trained on this dataset might be used to generate misleading or incorrect answers if not properly validated.
<a name="biases"></a>
### Discussion of Biases
The WikiHowQA dataset is derived from WikiHow, a community-driven platform. While WikiHow has guidelines to ensure the quality and neutrality of its content, biases could still be present due to the demographic and ideological characteristics of its contributors. Users of the dataset should be aware of this potential bias.
<a name="limitations"></a>
### Other Known Limitations
The dataset only contains 'how-to' questions and their answers. Therefore, it may not be suitable for tasks that require understanding of other types of questions (e.g., why, what, when, who, etc.). Additionally, while the dataset contains a large number of instances, there may still be topics or types of questions that are underrepresented.
<a name="data-loading"></a>
## Data Loading
There are two primary ways to load the QA dataset part:
1. Directly from the file: if you have the `.jsonl` file locally, you can load the dataset using the following Python code:
```python
import json

dataset = []
with open('wikiHowNFQA.jsonl', encoding='utf-8') as f:
    for line in f:
        dataset.append(json.loads(line))
```
This will result in a list of dictionaries, each representing a single instance in the dataset.
2. From the Hugging Face Datasets Hub: if the dataset is hosted on the Hub, you can load it directly using the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset('wikiHowNFQA')
```
This will return a `DatasetDict`, a dictionary-like object that maps split names (e.g., 'train', 'validation', 'test') to `Dataset` objects. You can access a specific split like so: `dataset['train']`. |
oooriii/solr_fine_tunning_ca | 2023-07-24T07:54:09.000Z | [
"task_categories:summarization",
"task_categories:translation",
"size_categories:10K<n<100K",
"language:ca",
"license:cc0-1.0",
"solr",
"translate",
"nl_2_solr",
"region:us"
] | oooriii | This dataset contains natural language search sentences in Catalan and their Solr query language translations.
This is the original dataset:
```
load_dataset("oooriii/solr_fine_tunning_ca", '3.0.0')
```
And this is the HuggingFace translation pipeline:
```
pipeline(
task='translation_en_to_nl',
model='Helsinki-NLP/opus-mt-en-nl',
tokenizer='Helsinki-NLP/opus-mt-en-nl')
``` | \ | null | 0 | 25 | ---
license: cc0-1.0
task_categories:
- summarization
- translation
language:
- ca
tags:
- solr
- translate
- nl_2_solr
size_categories:
- 10K<n<100K
---
# dataset: dataset_final_20230705
```
wc -l dataset_final_20230705.txt
15036 dataset_final_20230705.txt
```
15,035 rows with tab-separated fields:
`Id` `Language` `Text` `Expected` |
izumi-lab/oscar2301-ja-filter-ja-normal | 2023-07-29T03:16:00.000Z | [
"language:ja",
"license:cc0-1.0",
"region:us"
] | izumi-lab | null | null | null | 2 | 25 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 68837059273.1919
num_examples: 31447063
download_size: 54798731310
dataset_size: 68837059273.1919
license: cc0-1.0
language:
- ja
---
# Dataset Card for "oscar2301-ja-filter-ja-normal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FanChen0116/19100_chat_80x_slot_pvi | 2023-09-26T04:17:52.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-time
'2': B-date
'3': B-last_name
'4': B-people
'5': I-date
'6': I-people
'7': I-last_name
'8': I-first_name
'9': B-first_name
'10': B-time
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 613618
num_examples: 3277
- name: validation
num_bytes: 5405
num_examples: 32
- name: test
num_bytes: 646729
num_examples: 3731
download_size: 93581
dataset_size: 1265752
---
# Dataset Card for "19100_chat_80x_slot_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FelipeBandeiraPoatek/invoices-donut-data-v2 | 2023-07-20T21:21:21.000Z | [
"region:us"
] | FelipeBandeiraPoatek | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 234466949.0
num_examples: 425
- name: test
num_bytes: 15053216.0
num_examples: 26
- name: validation
num_bytes: 26678659.0
num_examples: 50
download_size: 197788456
dataset_size: 276198824.0
---
# Dataset Card for "invoices-donut-data-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Brendan/multiwoz_turns_v24 | 2023-07-27T01:22:40.000Z | [
"region:us"
] | Brendan | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: dialogue_id
dtype: string
- name: turn_id
dtype: int64
- name: user
dtype: string
- name: system_response
dtype: string
- name: history
sequence: string
- name: system_acts
struct:
- name: Attraction-Inform
sequence:
sequence: string
- name: Attraction-NoOffer
sequence:
sequence: string
- name: Attraction-Recommend
sequence:
sequence: string
- name: Attraction-Request
sequence:
sequence: string
- name: Attraction-Select
sequence:
sequence: string
- name: Booking-Book
sequence:
sequence: string
- name: Booking-Inform
sequence:
sequence: string
- name: Booking-NoBook
sequence:
sequence: string
- name: Booking-Request
sequence:
sequence: string
- name: Hotel-Inform
sequence:
sequence: string
- name: Hotel-NoOffer
sequence:
sequence: string
- name: Hotel-Recommend
sequence:
sequence: string
- name: Hotel-Request
sequence:
sequence: string
- name: Hotel-Select
sequence:
sequence: string
- name: Restaurant-Inform
sequence:
sequence: string
- name: Restaurant-NoOffer
sequence:
sequence: string
- name: Restaurant-Recommend
sequence:
sequence: string
- name: Restaurant-Request
sequence:
sequence: string
- name: Restaurant-Select
sequence:
sequence: string
- name: Taxi-Inform
sequence:
sequence: string
- name: Taxi-Request
sequence:
sequence: string
- name: Train-Inform
sequence:
sequence: string
- name: Train-NoOffer
sequence:
sequence: string
- name: Train-OfferBook
sequence:
sequence: string
- name: Train-OfferBooked
sequence:
sequence: string
- name: Train-Request
sequence:
sequence: string
- name: Train-Select
sequence:
sequence: string
- name: general-bye
sequence:
sequence: string
- name: general-greet
sequence:
sequence: string
- name: general-reqmore
sequence:
sequence: string
- name: general-welcome
sequence:
sequence: string
- name: belief_state
sequence:
sequence: string
- name: prev_belief_state
sequence:
sequence: string
- name: belief_state_delta
sequence:
sequence: string
- name: degenerate_user
dtype: bool
splits:
- name: train
num_bytes: 71669619
num_examples: 56719
- name: validation
num_bytes: 9862893
num_examples: 7374
- name: test
num_bytes: 9864860
num_examples: 7368
download_size: 15883931
dataset_size: 91397372
---
# Dataset Card for "multiwoz_turns_v24"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jinho8345/sroie-bio | 2023-07-29T07:03:00.000Z | [
"region:us"
] | jinho8345 | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: img
dtype: image
- name: labels
sequence: string
- name: words
sequence: string
- name: bboxes
sequence:
sequence: int64
- name: filename
dtype: string
splits:
- name: train
num_bytes: 299073901.0
num_examples: 526
- name: val
num_bytes: 59447631.0
num_examples: 100
download_size: 319041399
dataset_size: 358521532.0
---
# Dataset Card for "sroie-bio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pourmand1376/alpaca-fa-instruction | 2023-08-19T11:54:39.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:fa",
"license:apache-2.0",
"region:us"
] | pourmand1376 | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 5852058
num_examples: 5654
download_size: 2665151
dataset_size: 5852058
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
- question-answering
- conversational
language:
- fa
pretty_name: Alpaca Farsi Instruction
size_categories:
- 10K<n<100K
---
# Dataset Card for "alpaca-fa-instruction"
This dataset was first created [here](https://www.kaggle.com/datasets/amirpourmand/alpaca-farsi) and is published to Hugging Face following the Open-Assistant standard.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
engkufizz/router-switch-configuration | 2023-08-09T14:29:13.000Z | [
"region:us"
] | engkufizz | null | null | null | 0 | 25 | Entry not found |
dsfsi/vukuzenzele-monolingual | 2023-09-27T06:13:20.000Z | [
"task_categories:translation",
"language:eng",
"language:afr",
"language:nbl",
"language:xho",
"language:zul",
"language:nso",
"language:sep",
"language:tsn",
"language:ssw",
"language:ven",
"language:tso",
"license:cc-by-4.0",
"multilingual",
"government",
"arxiv:2303.03750",
"regio... | dsfsi | The dataset contains editions from the South African government magazine Vuk'uzenzele. Data was scraped from PDFs that have been placed in the data/raw folder. The PDFS were obtained from the Vuk'uzenzele website. | @dataset{marivate_vukosi_2023_7598540, author = {Marivate, Vukosi and Njini, Daniel and Madodonga, Andani and Lastrucci, Richard and Dzingirai, Isheanesu Rajab, Jenalea}, title = {The Vuk'uzenzele South African Multilingual Corpus}, month = feb, year = 2023, publisher = {Zenodo}, doi = {10.5281/zenodo.7598539}, url = {https://doi.org/10.5281/zenodo.7598539} } | null | 2 | 25 | ---
language:
- eng
- afr
- nbl
- xho
- zul
- nso
- sep
- tsn
- ssw
- ven
- tso
pretty_name: "The Vuk'uzenzele South African Multilingual Corpus"
tags:
- multilingual
- government
license: "cc-by-4.0"
task_categories:
- translation
arxiv: 2303.03750
---
# The Vuk'uzenzele South African Multilingual Corpus
## About Dataset
The dataset was obtained from the South African government magazine Vuk'uzenzele, created by the [Government Communication and Information System (GCIS)](https://www.gcis.gov.za/).
The original raw PDFs were obtained from the [Vuk'uzenzele website](https://www.vukuzenzele.gov.za/).
The datasets contain government magazine editions in 11 languages, namely:
| Language | Code | Language | Code |
|------------|-------|------------|-------|
| English | (eng) | Sepedi | (nso) |
| Afrikaans | (afr) | Setswana | (tsn) |
| isiNdebele | (nbl) | Siswati | (ssw) |
| isiXhosa | (xho) | Tshivenda | (ven) |
| isiZulu    | (zul) | Xitsonga   | (tso) |
| Sesotho    | (sot) |            |       |
**Note:** The languages use the ISO 639-2 language codes.
The data is split by language in JSONL format and each row is of the form:
```
{
"title": "Title for article",
"author": "Author Name or Vukuzenzele",
"text": "Article text",
"edition": "Linked Magazine edition",
"language_code": "ISO 639-2 language code"
}
```
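As a sketch, one language split can be read with the standard `json` module; the filename and helper name here are hypothetical, not taken from the dataset's documentation:

```python
import json

def load_articles(path):
    """Read one language split of the corpus from a JSONL file,
    returning a list of article dictionaries."""
    articles = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                articles.append(json.loads(line))
    return articles
```

Each returned dictionary then has the `title`, `author`, `text`, `edition`, and `language_code` keys described above.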
## Disclaimer
This dataset contains machine-readable data extracted from PDF documents, from https://www.vukuzenzele.gov.za/, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
## Authors
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Andani Madodonga
- Daniel Njini
- Richard Lastrucci
- Isheanesu Dzingirai
- Jenalea Rajab
## Citation
**Paper**
[Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/pdf/2303.03750)
```bibtex
@inproceedings{lastrucci-etal-2023-preparing,
    title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
    author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
    booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.rail-1.3",
    pages = "18--25"
}
```
**Dataset**
Vukosi Marivate, Andani Madodonga, Daniel Njini, Richard Lastrucci, Isheanesu Dzingirai, Jenalea Rajab. **The Vuk'uzenzele South African Multilingual Corpus**, 2023
```bibtex
@dataset{marivate_vukosi_2023_7598540,
    author = {Marivate, Vukosi and
              Njini, Daniel and
              Madodonga, Andani and
              Lastrucci, Richard and
              Dzingirai, Isheanesu and
              Rajab, Jenalea},
    title = {The Vuk'uzenzele South African Multilingual Corpus},
    month = feb,
    year = 2023,
    publisher = {Zenodo},
    doi = {10.5281/zenodo.7598539},
    url = {https://doi.org/10.5281/zenodo.7598539}
}
```
## Licences
* License for Data - [CC 4.0 BY](LICENSE.data.md)
* Licence for Code - [MIT License](LICENSE.md)
|
luisroque/instruct-python-llama2-500k | 2023-08-18T09:44:26.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | luisroque | null | null | null | 1 | 25 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1046127202
num_examples: 501349
download_size: 530786217
dataset_size: 1046127202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Instruct Python 500k
size_categories:
- 100K<n<1M
---
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 500k instructions.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
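The pairing and filtering steps above can be sketched roughly as follows. The input column names (`Id`, `ParentId`, `Title`, `Body`, `Score`) are assumptions based on the usual Stack Overflow dump schema, not taken from this card:

```python
import pandas as pd

def pair_top_answers(questions: pd.DataFrame, answers: pd.DataFrame) -> pd.DataFrame:
    """Keep only the top-scored answer per question, join it to its
    question via ParentId, drop negative-score entries, and merge the
    question title and body into a single field."""
    # Best answer per question: sort by score, keep first per ParentId
    top = (
        answers.sort_values("Score", ascending=False)
        .drop_duplicates(subset="ParentId")
    )
    merged = questions.merge(
        top, left_on="Id", right_on="ParentId",
        suffixes=("_question", "_answer"),
    )
    # Exclude entries with negative scores on either side
    merged = merged[(merged["Score_question"] >= 0) & (merged["Score_answer"] >= 0)]
    # Combine title and body into a single question field
    merged["question"] = merged["Title"] + "\n" + merged["Body_question"]
    return merged.rename(columns={
        "Score_question": "score_question",
        "Score_answer": "score_answer",
        "Body_answer": "answer",
    })[["score_question", "score_answer", "question", "answer"]]
```

The HTML-tag removal and Python-code filtering steps described above are omitted here for brevity.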
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:
```
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST]
```
Where:
- `system_prompt` gives context or instructions to the model.
- `user_message` is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
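As an illustration, a single-turn prompt in this format can be assembled with a small helper; the function name and exact newline placement are our own sketch, not part of the dataset:

```python
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble a single-turn prompt in the Llama2 chat format:
    system instructions wrapped in <<SYS>> tags, then the user message."""
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )
```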
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/) |