id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
myoutdooremail/llama2_finetuner | 2023-08-16T06:35:10.000Z | [
"region:us"
] | myoutdooremail | null | null | null | 0 | 14 | Entry not found |
TinyPixel/lima-m | 2023-09-26T03:38:33.000Z | [
"region:us"
] | TinyPixel | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2956460
num_examples: 1030
download_size: 1700295
dataset_size: 2956460
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lima-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
luisroque/instruct-python-500k | 2023-08-18T09:44:42.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | luisroque | null | null | null | 2 | 14 | ---
dataset_info:
features:
- name: score_question
dtype: int16
- name: score_answer
dtype: int16
- name: question
dtype: string
- name: answer
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 987469369
num_examples: 501349
download_size: 550185963
dataset_size: 987469369
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Instruct Python 500k
size_categories:
- 100K<n<1M
---
# Fine-tuning Instruct Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
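The pairing and filtering steps above can be sketched in plain Python (a simplified illustration; the `Id`, `ParentId`, `Score`, `Title`, and `Body` field names follow the Stack Overflow data-dump convention, and the HTML-tag removal step is omitted):

```python
def pair_top_answers(questions, answers):
    """Pair each question with its top-rated answer via the ParentId linkage,
    dropping negatively scored entries."""
    best = {}
    for a in answers:
        if a["Score"] < 0:
            continue
        qid = a["ParentId"]
        # Keep only the highest-scoring answer per question.
        if qid not in best or a["Score"] > best[qid]["Score"]:
            best[qid] = a
    pairs = []
    for q in questions:
        a = best.get(q["Id"])
        if a is None or q["Score"] < 0:
            continue
        pairs.append({
            # Combine the question's title and body into one field.
            "question": q["Title"] + "\n" + q["Body"],
            "answer": a["Body"],
            "score_question": q["Score"],
            "score_answer": a["Score"],
        })
    return pairs
```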
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/) |
desik98/telugu_paraphrase_instruction_tune_iith | 2023-08-18T10:57:40.000Z | [
"region:us"
] | desik98 | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 140868
num_examples: 516
download_size: 50573
dataset_size: 140868
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "telugu_paraphrase_instruction_tune_iith"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZhankuiHe/reddit_movie_raw | 2023-08-19T03:53:31.000Z | [
"task_categories:conversational",
"language:en",
"recommendation",
"arxiv:2001.08435",
"region:us"
] | ZhankuiHe | null | null | null | 0 | 14 | ---
task_categories:
- conversational
language:
- en
tags:
- recommendation
viewer: false
---
# Dataset Card for `Reddit-Movie-raw`
## Dataset Description
- **Homepage:** https://github.com/AaronHeee/LLMs-as-Zero-Shot-Conversational-RecSys
- **Repository:** https://github.com/AaronHeee/LLMs-as-Zero-Shot-Conversational-RecSys
- **Paper:** To appear
- **Point of Contact:** zhh004@eng.ucsd.edu
### Dataset Summary
This dataset provides the raw text from [Reddit](https://reddit.com) related to movie recommendation conversations.
The dataset is extracted from the data dump of [pushshift.io](https://arxiv.org/abs/2001.08435) and is for research use only.
### Disclaimer
⚠️ **Please note that conversations processed from Reddit raw data may include content that is not entirely conducive to a positive experience (e.g., toxic speech). Exercise caution and discretion when utilizing this information.**
### Folder Structure
We explain our data folder as follows:
```bash
reddit_movie_raw
├── IMDB-database
│ ├── clean.py # script to obtain clean IMDB movie titles, which can be used for movie name matching if needed.
│ ├── movie_clean.tsv # results after movie title cleaning
│ ├── title.basics.tsv # original movie title information from IMDB
│   └── title.ratings.tsv # original movie title and rating information from IMDB
├── Reddit-Movie-large
│   ├── sentences.jsonl # raw sentences from the subreddit/* data, which can be used for further processing
│ └── subreddit # raw text from different subreddits from Jan. 2012 to Dec. 2022 (large)
│ ├── bestofnetflix.jsonl
│ ├── movies.jsonl
│ ├── moviesuggestions.jsonl
│ ├── netflixbestof.jsonl
│ └── truefilm.jsonl
└── Reddit-Movie-small
    ├── sentences.jsonl # raw sentences from the subreddit/* data, which can be used for further processing
└── subreddit # raw text from different subreddits from Jan. 2022 to Dec. 2022 (small)
├── bestofnetflix.jsonl
├── movies.jsonl
├── moviesuggestions.jsonl
├── netflixbestof.jsonl
└── truefilm.jsonl
```
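Each `sentences.jsonl` file can be read line by line (a minimal sketch, assuming each line holds one standalone JSON object per the JSON Lines convention):

```python
import json

def read_jsonl(path):
    """Yield one parsed record per line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```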
### Data Processing
We also provide first-version processed Reddit-Movie datasets as [Reddit-Movie-small-V1]() and [Reddit-Movie-large-V1]().
Join us if you want to improve the processing quality as well!
### Citation Information
Please cite the following two papers if you use this raw data. Thanks!
```bib
@inproceedings{baumgartner2020pushshift,
title={The pushshift reddit dataset},
author={Baumgartner, Jason and Zannettou, Savvas and Keegan, Brian and Squire, Megan and Blackburn, Jeremy},
booktitle={Proceedings of the international AAAI conference on web and social media},
volume={14},
pages={830--839},
year={2020}
}
```
```bib
@inproceedings{he23large,
  title = "Large language models as zero-shot conversational recommenders",
author = "Zhankui He and Zhouhang Xie and Rahul Jha and Harald Steck and Dawen Liang and Yesu Feng and Bodhisattwa Majumder and Nathan Kallus and Julian McAuley",
year = "2023",
booktitle = "CIKM"
}
```
Please contact [Zhankui He](https://aaronheee.github.io) if you have any questions or suggestions. |
highnote/pubmed_qa | 2023-08-19T13:28:27.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<... | highnote | PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative
statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts.
PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances.
Each PubMedQA instance is composed of (1) a question which is either an existing research article
title or derived from one, (2) a context which is the corresponding abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question,
and (4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their
quantitative contents, is required to answer the questions. | @inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
} | null | 1 | 14 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: pubmedqa
pretty_name: PubMedQA
dataset_info:
- config_name: pqa_labeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: reasoning_required_pred
dtype: string
- name: reasoning_free_pred
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 2089200
num_examples: 1000
download_size: 687882700
dataset_size: 2089200
- config_name: pqa_unlabeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
splits:
- name: train
num_bytes: 125938502
num_examples: 61249
download_size: 687882700
dataset_size: 125938502
- config_name: pqa_artificial
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 443554667
num_examples: 211269
download_size: 687882700
dataset_size: 443554667
config_names:
- pqa_artificial
- pqa_labeled
- pqa_unlabeled
duplicated_from: pubmed_qa
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PUBMED_QA homepage](https://pubmedqa.github.io/)
- **Repository:** [PUBMED_QA repository](https://github.com/pubmedqa/pubmedqa)
- **Paper:** [PUBMED_QA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/abs/1909.06146)
- **Leaderboard:** [PUBMED_QA: Leaderboard](https://pubmedqa.github.io/)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset. |
ymoslem/MedicalSciences-StackExchange | 2023-08-20T17:26:24.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:sentence-similarity",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-sa-4.0",
"medical",
"region:us"
] | ymoslem | null | null | null | 2 | 14 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
- text-classification
- sentence-similarity
language:
- en
tags:
- medical
pretty_name: Medical Sciences StackExchange Questions & Answers
size_categories:
- 1K<n<10K
---
All StackExchange questions and their answers from the Medical Sciences site, up to 14 August 2023. The repository includes a notebook documenting the collection process, which uses the official StackExchange API. |
dim/ru_turbo_alpaca_evol_instruct_3k | 2023-08-20T19:04:41.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 14 | ---
license: mit
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: iteration
dtype: int64
splits:
- name: train
num_bytes: 6677510
num_examples: 3000
download_size: 3214805
dataset_size: 6677510
---
|
dim/ru_turbo_saiga_3k | 2023-08-20T19:10:49.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 14 | ---
license: mit
dataset_info:
features:
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: seed
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 6765306.998239436
num_examples: 3000
download_size: 3091422
dataset_size: 6765306.998239436
---
|
reciprocate/gsm8k_pairwise | 2023-08-23T20:23:06.000Z | [
"region:us"
] | reciprocate | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 106512
num_examples: 128
download_size: 65268
dataset_size: 106512
---
# Dataset Card for "gsm8k_pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tasksource/data | 2023-09-12T07:38:43.000Z | [
"license:other",
"region:us"
] | tasksource | null | null | null | 1 | 14 | ---
license: other
---
# Tasksource unified loader
```python
load_dataset('tasksource/data', "glue/rte", max_rows=3_000)
``` |
Sentdex/wsb_reddit_v001 | 2023-08-25T17:56:31.000Z | [
"license:apache-2.0",
"region:us"
] | Sentdex | null | null | null | 3 | 14 | ---
license: apache-2.0
---
This is approximately 2017–2018 /r/wallstreetbets subreddit comment/reply data that received at least a few upvotes. Other than filtering for parent/reply pairs and a minimum vote threshold (5), no effort has been made in this version to improve the quality of the dataset. Future versions may try to improve the quality. |
duwuonline/UIT-VSMEC | 2023-08-28T09:14:35.000Z | [
"task_categories:text-classification",
"language:vi",
"license:other",
"sentiment",
"classificati",
"region:us"
] | duwuonline | null | null | null | 0 | 14 | ---
license: other
language:
- vi
tags:
- sentiment
- classificati
task_categories:
- text-classification
---
## Dataset description
This dataset comes from UIT (the University of Information Technology).
It contains 7 classes: 'Other', 'Disgust', 'Enjoyment', 'Anger', 'Surprise', 'Sadness', 'Fear'.
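A minimal label mapping over the seven classes might look like this (the id order is an assumption for illustration, not the dataset's official encoding):

```python
# Hypothetical id order -- check the dataset's own label encoding before use.
EMOTIONS = ["Other", "Disgust", "Enjoyment", "Anger", "Surprise", "Sadness", "Fear"]
LABEL2ID = {name: i for i, name in enumerate(EMOTIONS)}
ID2LABEL = {i: name for name, i in LABEL2ID.items()}
```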
## Contributions
Thanks to ViDataset - Vietnamese Datasets for Natural Language Processing for sharing this dataset.
|
nayohan/kullm-v2_ppl | 2023-08-28T19:55:39.000Z | [
"region:us"
] | nayohan | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: ppl
dtype: float64
- name: len
dtype: int64
splits:
- name: train
num_bytes: 215898169
num_examples: 144069
download_size: 108642849
dataset_size: 215898169
---
# Dataset Card for "kullm-v2_ppl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chiragtubakad/chart-to-table | 2023-08-30T12:29:04.000Z | [
"region:us"
] | chiragtubakad | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 39979829.0
num_examples: 990
download_size: 33926492
dataset_size: 39979829.0
---
# Dataset Card for "chart-to-table"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/grade_school_math_instructions_3k | 2023-08-31T19:22:37.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 14 | ---
license: mit
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 1639530.0272975431
num_examples: 3000
download_size: 867611
dataset_size: 1639530.0272975431
---
|
dim/tldr_17_3k | 2023-08-31T19:33:59.000Z | [
"region:us"
] | dim | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: author
dtype: string
- name: body
dtype: string
- name: normalizedBody
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: id
dtype: string
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 14761884.702975057
num_examples: 3000
download_size: 9479190
dataset_size: 14761884.702975057
---
# Dataset Card for "tldr_17_3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/tldr_news_3k | 2023-08-31T19:37:18.000Z | [
"region:us"
] | dim | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: headline
dtype: string
- name: content
dtype: string
- name: category
dtype:
class_label:
names:
'0': Sponsor
'1': Big Tech & Startups
'2': Science and Futuristic Technology
'3': Programming, Design & Data Science
'4': Miscellaneous
splits:
- name: train
num_bytes: 1681328.9436817036
num_examples: 3000
download_size: 1064733
dataset_size: 1681328.9436817036
---
# Dataset Card for "tldr_news_3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/grade_school_math_instructions_ru_3k | 2023-08-31T20:03:30.000Z | [
"region:us"
] | dim | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2736097.1497390606
num_examples: 3000
download_size: 1313700
dataset_size: 2736097.1497390606
---
# Dataset Card for "grade_school_math_instructions_ru_3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/dialogsum_ru_3k | 2023-08-31T20:06:48.000Z | [
"region:us"
] | dim | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 4602365.489566613
num_examples: 3000
download_size: 2244730
dataset_size: 4602365.489566613
---
# Dataset Card for "dialogsum_ru_3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/HC3_ru_8k | 2023-08-31T20:48:03.000Z | [
"region:us"
] | dim | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: human_answers
sequence: string
- name: chatgpt_answers
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 44537809.06175479
num_examples: 8000
download_size: 21121279
dataset_size: 44537809.06175479
---
# Dataset Card for "HC3_ru_8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jisoul/medical_consultation | 2023-09-15T07:29:44.000Z | [
"region:us"
] | jisoul | null | null | null | 0 | 14 | Entry not found |
zxvix/c4_biomedical | 2023-09-03T09:53:11.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3949195.0
num_examples: 1000
download_size: 2366762
dataset_size: 3949195.0
---
# Dataset Card for "c4_biomedical"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chengli-thu/linghuchong | 2023-09-03T01:57:53.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-4.0",
"arxiv:2308.09597",
"region:us"
] | chengli-thu | null | null | null | 1 | 14 | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- zh
size_categories:
- 1K<n<10K
---
Linghu Chong (令狐冲) data supporting ChatHaruhi2; it can be invoked as follows:
```python
from chatharuhi import ChatHaruhi
chatbot = ChatHaruhi( role_from_hf = 'chengli-thu/linghuchong', \
llm = 'openai')
response = chatbot.chat(role='小师妹', text = '冲哥。')
print(response)
```
Uploader: 李鲁鲁 (Li Lulu)
For more details, see [ChatHaruhi](https://github.com/LC1332/Chat-Haruhi-Suzumiya)
You are welcome to join our [crowdsourced character creation project](https://github.com/LC1332/Chat-Haruhi-Suzumiya/tree/main/characters/novel_collecting)
### Citation
Please cite this repository if you use its data or code.
```
@misc{li2023chatharuhi,
title={ChatHaruhi: Reviving Anime Character in Reality via Large Language Model},
author={Cheng Li and Ziang Leng and Chenxi Yan and Junyi Shen and Hao Wang and Weishi MI and Yaying Fei and Xiaoyang Feng and Song Yan and HaoSheng Wang and Linkang Zhan and Yaokai Jia and Pingyu Wu and Haozhen Sun},
year={2023},
eprint={2308.09597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
zxvix/c4_academic | 2023-09-03T09:55:26.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 3204161.0
num_examples: 1000
download_size: 1998132
dataset_size: 3204161.0
---
# Dataset Card for "c4_academic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hjerpe/github-kubeflow-issues | 2023-09-13T18:41:35.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:topic-classification",
"task_ids:intent-classification",
"task_ids:multi-label-classification",
"task_ids:multi-class-classification",
"annotations_creators:no-annota... | hjerpe | null | null | null | 0 | 14 | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- en
license: []
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- topic-classification
- intent-classification
- multi-label-classification
- multi-class-classification
pretty_name: github-kubeflow-pipelines-issues
tags:
- GitHub-Issues
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: timestamp[s]
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 9230693
num_examples: 1567
download_size: 0
dataset_size: 9230693
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
## Dataset Description
- **Point of Contact:** [Adam Hjerpe](mailto:hjerpeadam5@gmail.com)
### Dataset Summary
GitHub Issues is a dataset consisting of the top 5,000 GitHub issues, as of 2023-09-02, associated with the Kubeflow Pipelines [repository](https://github.com/kubeflow/pipelines). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the Kubeflow Pipelines project.
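Because the features above include an `is_pull_request` flag, true issues can be separated from pull requests before search or classification (a minimal sketch over plain dict records):

```python
def split_issues(records):
    """Split records into true issues and pull requests using the
    is_pull_request flag declared in the dataset features."""
    issues = [r for r in records if not r["is_pull_request"]]
    pulls = [r for r in records if r["is_pull_request"]]
    return issues, pulls
```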
### Languages
The dataset contains English text typical of software development discussions.
### Contributions
Thanks to [@hjerpe](https://github.com/hjerpe) for adding this dataset. |
nayohan/OIG-small-chip2-ko_ppl | 2023-09-03T09:12:30.000Z | [
"region:us"
] | nayohan | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: user
dtype: string
- name: chip2
dtype: string
- name: index
dtype: int64
- name: user_translated
dtype: string
- name: chip2_translated
dtype: string
- name: ppl
dtype: float64
- name: len
dtype: int64
splits:
- name: train
num_bytes: 183258239
num_examples: 210282
download_size: 110338193
dataset_size: 183258239
---
# Dataset Card for "OIG-small-chip2-ko_ppl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TokenBender/unnatural_code_instructions_20M_tokens_separate | 2023-09-05T15:08:52.000Z | [
"license:apache-2.0",
"region:us"
] | TokenBender | null | null | null | 2 | 14 | ---
license: apache-2.0
---
|
TokenBender/roleplay_raw_text | 2023-09-24T20:48:28.000Z | [
"license:apache-2.0",
"region:us"
] | TokenBender | null | null | null | 1 | 14 | ---
license: apache-2.0
---
|
ksabeh/ae-100k-dataset | 2023-09-08T15:04:28.000Z | [
"region:us"
] | ksabeh | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: text
dtype: string
- name: attribute
dtype: string
- name: value
dtype: string
- name: id
dtype: int64
- name: text_value
dtype: string
- name: text_attribute
dtype: string
splits:
- name: train
num_bytes: 77020707
num_examples: 178395
- name: validation
num_bytes: 9599202
num_examples: 22270
- name: test
num_bytes: 9515754
num_examples: 22117
download_size: 15992175
dataset_size: 96135663
---
# Dataset Card for "ae-100k-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
narySt/CommitChronicle_valPretrained | 2023-09-07T14:24:45.000Z | [
"region:us"
] | narySt | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: message
dtype: string
- name: model_input
dtype: string
- name: input_ids
sequence: int64
- name: attention_mask
sequence: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1432270229
num_examples: 109505
download_size: 81607985
dataset_size: 1432270229
---
# Dataset Card for "CommitChronicle_valPretrained"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Linhz/qg_vimmrc | 2023-09-08T03:35:32.000Z | [
"region:us"
] | Linhz | null | null | null | 0 | 14 | Entry not found |
daeell/embedding-test | 2023-09-08T10:05:55.000Z | [
"language:en",
"language:ko",
"license:mit",
"region:us"
] | daeell | null | null | null | 0 | 14 | ---
license: mit
language:
- en
- ko
--- |
OneFly7/llama2-politosphere-fine-tuning-system-prompt-without-definition | 2023-09-10T09:04:00.000Z | [
"region:us"
] | OneFly7 | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 64460
num_examples: 113
- name: validation
num_bytes: 62208
num_examples: 113
download_size: 35724
dataset_size: 126668
---
# Dataset Card for "llama2-politosphere-fine-tuning-system-prompt_without_definition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OneFly7/llama2-politosphere-fine-tuning-pol-unpol-oth | 2023-09-10T18:19:22.000Z | [
"region:us"
] | OneFly7 | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 153314
num_examples: 113
- name: validation
num_bytes: 151948
num_examples: 113
download_size: 59375
dataset_size: 305262
---
# Dataset Card for "llama2-politosphere-fine-tuning-pol-unpol-oth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JosephLee/elementary-science-textbook | 2023-09-11T12:59:47.000Z | [
"region:us"
] | JosephLee | null | null | null | 1 | 14 | Entry not found |
JeisonJA/CSV_TRAIN_FORMAT | 2023-09-11T23:52:29.000Z | [
"license:apache-2.0",
"region:us"
] | JeisonJA | null | null | null | 0 | 14 | ---
license: apache-2.0
---
|
OptimusKoala/topachat_v2 | 2023-09-13T09:41:38.000Z | [
"license:apache-2.0",
"region:us"
] | OptimusKoala | null | null | null | 0 | 14 | ---
license: apache-2.0
---
|
c123ian/dell_qa | 2023-09-13T16:05:13.000Z | [
"region:us"
] | c123ian | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 48917221
num_examples: 45560
download_size: 28797124
dataset_size: 48917221
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xdokkax/vb6_csharp | 2023-09-14T05:27:03.000Z | [
"license:mit",
"region:us"
] | xdokkax | null | null | null | 0 | 14 | ---
license: mit
---
|
ahmed000000000/mr-llama | 2023-09-14T09:22:21.000Z | [
"region:us"
] | ahmed000000000 | null | null | null | 0 | 14 | Entry not found |
warshakhan/donut_vqa_ISynHMP | 2023-09-15T07:12:51.000Z | [
"task_categories:visual-question-answering",
"language:en",
"license:unknown",
"medical",
" prescriptions",
"region:us"
] | warshakhan | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 578804498
num_examples: 2800
- name: valid
num_bytes: 85350687
num_examples: 400
- name: test
num_bytes: 172300907
num_examples: 800
download_size: 804418576
dataset_size: 836456092
license: unknown
task_categories:
- visual-question-answering
language:
- en
tags:
- medical
- ' prescriptions'
---
# Dataset Card for "donut_vqa_ISynHMP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pszemraj/simpleRW-lite | 2023-09-15T04:57:09.000Z | [
"source_datasets:pszemraj/simple_wikipedia_LM",
"source_datasets:pszemraj/refinedweb-3m-deduped-split",
"region:us"
] | pszemraj | null | null | null | 1 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1136718026.846949
num_examples: 452484
- name: validation
num_bytes: 30473651.26394911
num_examples: 11908
- name: test
num_bytes: 30471237.904544305
num_examples: 11908
download_size: 538623929
dataset_size: 1197662916.0154426
source_datasets:
- pszemraj/simple_wikipedia_LM
- pszemraj/refinedweb-3m-deduped-split
---
# Dataset Card for "simpleRW-lite"
interleaved simple wikipedia LM + refinedweb-3m
```python
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 452484
})
validation: Dataset({
features: ['text'],
num_rows: 11908
})
test: Dataset({
features: ['text'],
num_rows: 11908
})
})
```
train
```
Descriptive Stats Using Pandas:
count 452484.000000
mean 430.923633
std 1391.959655
min 0.000000
25% 83.000000
50% 175.000000
75% 432.000000
max 135922.000000
dtype: float64
```
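As an illustrative sketch (not the script that produced the table above), stats of this shape can be reproduced with pandas by measuring per-example length over `dataset["train"]["text"]` — the toy list below stands in for the real column:

```python
import pandas as pd

# Toy stand-in for dataset["train"]["text"]; swap in the real column
# after loading the dataset to reproduce the descriptive stats.
texts = ["one two three", "one two", "one"]

# Per-example whitespace token count, as a pandas Series.
lengths = pd.Series([len(t.split()) for t in texts])
print(lengths.describe())  # count / mean / std / min / quartiles / max
```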
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chin-may/audio-1 | 2023-09-15T09:39:30.000Z | [
"region:us"
] | chin-may | null | null | null | 0 | 14 | Entry not found |
PL-MTEB/plsc-clustering-p2p | 2023-09-15T12:19:01.000Z | [
"license:cc0-1.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 14 | ---
license: cc0-1.0
---
|
chenqile09/llama2-chinese-couplet-100k | 2023-09-17T22:03:11.000Z | [
"region:us"
] | chenqile09 | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33921909.405820444
num_examples: 100000
- name: validation
num_bytes: 1358512
num_examples: 4000
download_size: 13630532
dataset_size: 35280421.405820444
---
# Dataset Card for "llama2-chinese-couplet-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AI4EPS/quakeflow_demo | 2023-09-17T06:55:07.000Z | [
"license:mit",
"region:us"
] | AI4EPS | A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format. | @InProceedings{huggingface:dataset,
title = {NCEDC dataset for QuakeFlow},
author={Zhu et al.},
year={2023}
} | null | 0 | 14 | ---
license: mit
---
|
Falah/fox_0_prompts | 2023-09-17T12:38:53.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 3047
num_examples: 14
download_size: 3558
dataset_size: 3047
---
# Dataset Card for "fox_0_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kewu93/three_styles_coded | 2023-09-19T23:37:11.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 60392465.7
num_examples: 2100
- name: val
num_bytes: 25916253.5
num_examples: 900
download_size: 84975483
dataset_size: 86308719.2
---
# Dataset Card for "three_styles_coded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sanctia/finesse_image_generation | 2023-09-21T12:01:02.000Z | [
"region:us"
] | sanctia | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3681350830.818
num_examples: 1389
download_size: 3170381883
dataset_size: 3681350830.818
---
# Dataset Card for "finesse_image_generation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tuxmx/sismos-mexico.csv | 2023-09-21T03:46:27.000Z | [
"region:us"
] | tuxmx | null | null | null | 0 | 14 | Entry not found |
Dippi9845/arxiv-no-stop-word-2 | 2023-09-20T17:13:45.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | Dippi9845 | null | null | null | 0 | 14 | ---
license: cc-by-nc-4.0
---
|
tuxmx/artists | 2023-09-21T14:28:26.000Z | [
"region:us"
] | tuxmx | null | null | null | 0 | 14 | Entry not found |
Falah/photojournalism_fisherwoman | 2023-09-21T07:55:23.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1219362
num_examples: 10000
download_size: 25612
dataset_size: 1219362
---
# Dataset Card for "photojournalism_fisherwoman"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jkv53/13F_Reports_with_labels | 2023-09-22T15:01:37.000Z | [
"region:us"
] | jkv53 | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: title
dtype: string
- name: body
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 12642773
num_examples: 1113
download_size: 3334911
dataset_size: 12642773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "13F_Reports_with_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SebastianMoncaleano/cammel_json | 2023-09-22T21:59:17.000Z | [
"region:us"
] | SebastianMoncaleano | null | null | null | 0 | 14 | Entry not found |
vjain/RAG-LANGCHAIN | 2023-09-25T18:26:45.000Z | [
"license:mit",
"region:us"
] | vjain | null | null | null | 0 | 14 | ---
license: mit
---
|
lonestar108/plenumvideos | 2023-09-24T09:52:01.000Z | [
"region:us"
] | lonestar108 | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 583798
num_examples: 133
download_size: 267509
dataset_size: 583798
---
# Dataset Card for "plenumvideos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
roupenminassian/vehicle-pedestrian-object-detection | 2023-09-24T12:11:22.000Z | [
"region:us"
] | roupenminassian | null | null | null | 0 | 14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: int64
- name: width
dtype: int64
- name: height
dtype: int64
- name: objects
struct:
- name: id
sequence: int64
- name: area
sequence: float64
- name: bbox
sequence:
sequence: float64
- name: category
sequence: int64
splits:
- name: train
num_bytes: 73699392.0
num_examples: 200
download_size: 73671552
dataset_size: 73699392.0
---
# Dataset Card for "vehicle-pedestrian-object-detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hemann/Illusion_diffusion | 2023-10-02T11:25:25.000Z | [
"region:us"
] | Hemann | null | null | null | 0 | 14 | Entry not found |
mikeee/en-zh-nyt31k | 2023-09-24T12:56:28.000Z | [
"region:us"
] | mikeee | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: english
dtype: string
- name: chinese
dtype: string
splits:
- name: train
num_bytes: 15197924
num_examples: 31449
download_size: 10056620
dataset_size: 15197924
---
# Dataset Card for "en-zh-nyt31k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pharaouk/antidata | 2023-09-25T02:13:26.000Z | [
"region:us"
] | pharaouk | null | null | null | 0 | 14 | Entry not found |
shraddha18/training_dataset_without_decoded_Qlora_v2 | 2023-09-26T06:10:55.000Z | [
"license:apache-2.0",
"region:us"
] | shraddha18 | null | null | null | 0 | 14 | ---
license: apache-2.0
---
|
bigbio/czi_drsm | 2023-09-26T13:46:34.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | bigbio | Research Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets:
(A) classifies title/abstracts of papers into most popular subtypes of clinical, basic, and translational papers (~20k papers);
- Clinical Characteristics, Disease Pathology, and Diagnosis -
Text that describes (A) symptoms, signs, or ‘phenotype’ of a disease;
(B) the effects of the disease on patient organs, tissues, or cells;
(C) the results of clinical tests that reveal pathology (including
 biomarkers); (D) research that uses this information to figure out
a diagnosis.
- Therapeutics in the clinic -
Text describing how treatments work in the clinic (but not in a clinical trial).
- Disease mechanism -
Text that describes either (A) mechanistic involvement of specific genes in disease
(deletions, gain of function, etc); (B) how molecular signalling or metabolism
 (binding, activating, phosphorylation, concentration increase, etc.)
are involved in the mechanism of a disease; or (C) the physiological
mechanism of disease at the level of tissues, organs, and body systems.
- Patient-Based Therapeutics -
Text describing (A) Clinical trials (studies of therapeutic measures being
used on patients in a clinical trial); (B) Post Marketing Drug Surveillance
(effects of a drug after approval in the general population or as part of
‘standard healthcare’); (C) Drug repurposing (how a drug that has been
approved for one use is being applied to a new disease).
(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers);
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study does not directly investigate quality of life
- 1 - the study investigates qol but not as its primary contribution
- 2 - the study's primary contribution centers on quality of life measures
(C) identifies if a paper is a natural history study (~10k papers).
 - -1 - the paper is not a primary experimental study in rare disease
- 0 - the study is not directly investigating the natural history of a disease
- 1 - the study includes some elements a natural history but not as its primary contribution
- 2 - the study's primary contribution centers on observing the time course of a rare disease
These classifications are particularly relevant in rare disease research, a field that is generally understudied. | @article{,
author = {},
title = {},
journal = {},
volume = {},
year = {},
url = {},
doi = {},
biburl = {},
bibsource = {}
} | null | 0 | 14 | ---
language:
- en
bigbio_language:
- English
license: cc0-1.0
bigbio_license_shortname: cc0-1.0
multilinguality: monolingual
pretty_name: CZI DRSM
homepage: https://github.com/chanzuckerberg/DRSM-corpus
bigbio_pubmed: false
bigbio_public: true
bigbio_tasks:
- TXTCLASS
---
# Dataset Card for CZI DRSM
## Dataset Description
- **Homepage:** https://github.com/chanzuckerberg/DRSM-corpus
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
Research Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets:
(A) classifies title/abstracts of papers into most popular subtypes of clinical, basic, and translational papers (~20k papers);
- Clinical Characteristics, Disease Pathology, and Diagnosis -
Text that describes (A) symptoms, signs, or ‘phenotype’ of a disease;
(B) the effects of the disease on patient organs, tissues, or cells;
(C) the results of clinical tests that reveal pathology (including
 biomarkers); (D) research that uses this information to figure out
a diagnosis.
- Therapeutics in the clinic -
Text describing how treatments work in the clinic (but not in a clinical trial).
- Disease mechanism -
Text that describes either (A) mechanistic involvement of specific genes in disease
(deletions, gain of function, etc); (B) how molecular signalling or metabolism
 (binding, activating, phosphorylation, concentration increase, etc.)
are involved in the mechanism of a disease; or (C) the physiological
mechanism of disease at the level of tissues, organs, and body systems.
- Patient-Based Therapeutics -
Text describing (A) Clinical trials (studies of therapeutic measures being
used on patients in a clinical trial); (B) Post Marketing Drug Surveillance
(effects of a drug after approval in the general population or as part of
‘standard healthcare’); (C) Drug repurposing (how a drug that has been
approved for one use is being applied to a new disease).
(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers);
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study does not directly investigate quality of life
- 1 - the study investigates qol but not as its primary contribution
- 2 - the study's primary contribution centers on quality of life measures
(C) identifies if a paper is a natural history study (~10k papers).
- -1 - the paper is not a primary experimental study in rare disease
- 0 - the study is not directly investigating the natural history of a disease
- 1 - the study includes some elements a natural history but not as its primary contribution
- 2 - the study's primary contribution centers on observing the time course of a rare disease
These classifications are particularly relevant in rare disease research, a field that is generally understudied.
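For illustration only (the dataset card defines the label values above; the Python names below are ours), the subset (C) scheme can be encoded as a lookup:

```python
# Hypothetical label map for subset (C), natural history studies,
# transcribed from the scheme listed above.
NATURAL_HISTORY_LABELS = {
    -1: "not a primary experimental study in rare disease",
    0: "not directly investigating the natural history of a disease",
    1: "includes some natural-history elements, not the primary contribution",
    2: "primary contribution centers on the time course of a rare disease",
}

def describe_label(label: int) -> str:
    """Return the human-readable meaning of a subset (C) label."""
    return NATURAL_HISTORY_LABELS[label]

print(describe_label(2))
```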
## Citation Information
```
# N/A
```
|
polinaeterna/tabular-benchmark | 2023-09-28T12:11:36.000Z | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"region:us"
] | polinaeterna | null | null | null | 0 | 14 |
---
annotations_creators: []
license: []
pretty_name: tabular_benchmark
tags: []
task_categories:
- tabular-classification
- tabular-regression
configs:
- config_name: clf_cat_covertype
data_files: clf_cat/covertype.csv
- config_name: clf_num_Higgs
data_files: clf_num/Higgs.csv
---
# Tabular Benchmark
## Dataset Description
This dataset is a curation of various datasets from [openML](https://www.openml.org/), assembled to benchmark the performance of machine learning algorithms on tabular data.
- **Repository:** https://github.com/LeoGrin/tabular-benchmark/community
- **Paper:** https://hal.archives-ouvertes.fr/hal-03723551v2/document
### Dataset Summary
A benchmark composed of curated tabular data learning tasks, including:
- Regression from Numerical and Categorical Features
- Regression from Numerical Features
- Classification from Numerical and Categorical Features
- Classification from Numerical Features
### Supported Tasks and Leaderboards
- `tabular-regression`
- `tabular-classification`
## Dataset Structure
### Data Splits
This dataset consists of four splits (folders) based on tasks and datasets included in tasks.
- reg_num: Task identifier for regression on numerical features.
- reg_cat: Task identifier for regression on numerical and categorical features.
- clf_num: Task identifier for classification on numerical features.
- clf_cat: Task identifier for classification on numerical and categorical features.
Depending on the dataset you want to load, you can load the dataset by passing `task_name/dataset_name` to `data_files` argument of `load_dataset` like below:
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files="reg_cat/house_sales.csv")
```
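The `task_name/dataset_name` convention can be captured in a small helper — a sketch only, since `data_files_path` is our name and not part of the `datasets` library:

```python
# Folder names taken from the split list above.
TASKS = {"reg_num", "reg_cat", "clf_num", "clf_cat"}

def data_files_path(task: str, dataset: str) -> str:
    """Build the data_files argument for load_dataset from a task folder and dataset name."""
    if task not in TASKS:
        raise ValueError(f"unknown task folder: {task}")
    return f"{task}/{dataset}.csv"

# e.g. the house_sales example above:
print(data_files_path("reg_cat", "house_sales"))  # reg_cat/house_sales.csv
```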
## Dataset Creation
### Curation Rationale
This dataset is curated to benchmark the performance of tree-based models against neural networks. The process of selecting the datasets for curation is described in the paper as follows:
- **Heterogeneous columns**. Columns should correspond to features of different nature. This excludes
images or signal datasets where each column corresponds to the same signal on different sensors.
- **Not high dimensional**. We only keep datasets with a d/n ratio below 1/10.
- **Undocumented datasets** We remove datasets where too little information is available. We did keep
datasets with hidden column names if it was clear that the features were heterogeneous.
- **I.I.D. data**. We remove stream-like datasets or time series.
- **Real-world data**. We remove artificial datasets but keep some simulated datasets. The difference is
subtle, but we try to keep simulated datasets if learning these datasets are of practical importance
(like the Higgs dataset), and not just a toy example to test specific model capabilities.
- **Not too small**. We remove datasets with too few features (< 4) and too few samples (< 3 000). For
benchmarks on numerical features only, we remove categorical features before checking if enough
features and samples are remaining.
- **Not too easy**. We remove datasets which are too easy. Specifically, we remove a dataset if a simple model (max of a single tree and a regression, logistic or OLS)
reaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn)
is below 5%. Other benchmarks use different metrics to remove too easy datasets, like removing datasets perfectly separated by a single decision classifier [Bischl et al., 2021],
but this ignores varying Bayes rate across datasets. As tree ensembles are superior to simple trees and logistic regression [Fernández-Delgado et al., 2014],
a close score for the simple and powerful models suggests that we are already close to the best achievable score.
- **Not deterministic**. We remove datasets where the target is a deterministic function of the data. This
mostly means removing datasets on games like poker and chess. Indeed, we believe that these
datasets are very different from most real-world tabular datasets, and should be studied separately.
### Source Data
**Numerical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|7.0|https://www.openml.org/d/151|https://www.openml.org/d/44120|
|covertype|566602.0|10.0|https://www.openml.org/d/293|https://www.openml.org/d/44121|
|pol|10082.0|26.0|https://www.openml.org/d/722|https://www.openml.org/d/44122|
|house_16H|13488.0|16.0|https://www.openml.org/d/821|https://www.openml.org/d/44123|
|MagicTelescope|13376.0|10.0|https://www.openml.org/d/1120|https://www.openml.org/d/44125|
|bank-marketing|10578.0|7.0|https://www.openml.org/d/1461|https://www.openml.org/d/44126|
|Bioresponse|3434.0|419.0|https://www.openml.org/d/4134|https://www.openml.org/d/45019|
|MiniBooNE|72998.0|50.0|https://www.openml.org/d/41150|https://www.openml.org/d/44128|
|default-of-credit-card-clients|13272.0|20.0|https://www.openml.org/d/42477|https://www.openml.org/d/45020|
|Higgs|940160.0|24.0|https://www.openml.org/d/42769|https://www.openml.org/d/44129|
|eye_movements|7608.0|20.0|https://www.openml.org/d/1044|https://www.openml.org/d/44130|
|Diabetes130US|71090.0|7.0|https://www.openml.org/d/4541|https://www.openml.org/d/45022|
|jannis|57580.0|54.0|https://www.openml.org/d/41168|https://www.openml.org/d/45021|
|heloc|10000.0|22.0|"https://www.kaggle.com/datasets/averkiyoliabev/home-equity-line-of-creditheloc?select=heloc_dataset_v1+%281%29.csv"|https://www.openml.org/d/45026|
|credit|16714.0|10.0|"https://www.kaggle.com/c/GiveMeSomeCredit/data?select=cs-training.csv"|https://www.openml.org/d/44089|
|california|20634.0|8.0|"https://www.dcc.fc.up.pt/ltorgo/Regression/cal_housing.html"|https://www.openml.org/d/45028|
**Categorical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|8.0|https://www.openml.org/d/151|https://www.openml.org/d/44156|
|eye_movements|7608.0|23.0|https://www.openml.org/d/1044|https://www.openml.org/d/44157|
|covertype|423680.0|54.0|https://www.openml.org/d/1596|https://www.openml.org/d/44159|
|albert|58252.0|31.0|https://www.openml.org/d/41147|https://www.openml.org/d/45035|
|compas-two-years|4966.0|11.0|https://www.openml.org/d/42192|https://www.openml.org/d/45039|
|default-of-credit-card-clients|13272.0|21.0|https://www.openml.org/d/42477|https://www.openml.org/d/45036|
|road-safety|111762.0|32.0|https://www.openml.org/d/42803|https://www.openml.org/d/45038|
**Numerical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|cpu_act|8192.0|21.0|https://www.openml.org/d/197|https://www.openml.org/d/44132|
|pol|15000.0|26.0|https://www.openml.org/d/201|https://www.openml.org/d/44133|
|elevators|16599.0|16.0|https://www.openml.org/d/216|https://www.openml.org/d/44134|
|wine_quality|6497.0|11.0|https://www.openml.org/d/287|https://www.openml.org/d/44136|
|Ailerons|13750.0|33.0|https://www.openml.org/d/296|https://www.openml.org/d/44137|
|yprop_4_1|8885.0|42.0|https://www.openml.org/d/416|https://www.openml.org/d/45032|
|houses|20640.0|8.0|https://www.openml.org/d/537|https://www.openml.org/d/44138|
|house_16H|22784.0|16.0|https://www.openml.org/d/574|https://www.openml.org/d/44139|
|delays_zurich_transport|5465575.0|9.0|https://www.openml.org/d/40753|https://www.openml.org/d/45034|
|diamonds|53940.0|6.0|https://www.openml.org/d/42225|https://www.openml.org/d/44140|
|Brazilian_houses|10692.0|8.0|https://www.openml.org/d/42688|https://www.openml.org/d/44141|
|Bike_Sharing_Demand|17379.0|6.0|https://www.openml.org/d/42712|https://www.openml.org/d/44142|
|nyc-taxi-green-dec-2016|581835.0|9.0|https://www.openml.org/d/42729|https://www.openml.org/d/44143|
|house_sales|21613.0|15.0|https://www.openml.org/d/42731|https://www.openml.org/d/44144|
|sulfur|10081.0|6.0|https://www.openml.org/d/23515|https://www.openml.org/d/44145|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/44146|
|MiamiHousing2016|13932.0|14.0|https://www.openml.org/d/43093|https://www.openml.org/d/44147|
|superconduct|21263.0|79.0|https://www.openml.org/d/43174|https://www.openml.org/d/44148|
**Categorical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|topo_2_1|8885.0|255.0|https://www.openml.org/d/422|https://www.openml.org/d/45041|
|analcatdata_supreme|4052.0|7.0|https://www.openml.org/d/504|https://www.openml.org/d/44055|
|visualizing_soil|8641.0|4.0|https://www.openml.org/d/688|https://www.openml.org/d/44056|
|delays_zurich_transport|5465575.0|12.0|https://www.openml.org/d/40753|https://www.openml.org/d/45045|
|diamonds|53940.0|9.0|https://www.openml.org/d/42225|https://www.openml.org/d/44059|
|Allstate_Claims_Severity|188318.0|124.0|https://www.openml.org/d/42571|https://www.openml.org/d/45046|
|Mercedes_Benz_Greener_Manufacturing|4209.0|359.0|https://www.openml.org/d/42570|https://www.openml.org/d/44061|
|Brazilian_houses|10692.0|11.0|https://www.openml.org/d/42688|https://www.openml.org/d/44062|
|Bike_Sharing_Demand|17379.0|11.0|https://www.openml.org/d/42712|https://www.openml.org/d/44063|
|Airlines_DepDelay_1M|1000000.0|5.0|https://www.openml.org/d/42721|https://www.openml.org/d/45047|
|nyc-taxi-green-dec-2016|581835.0|16.0|https://www.openml.org/d/42729|https://www.openml.org/d/44065|
|abalone|4177.0|8.0|https://www.openml.org/d/42726|https://www.openml.org/d/45042|
|house_sales|21613.0|17.0|https://www.openml.org/d/42731|https://www.openml.org/d/44066|
|seattlecrime6|52031.0|4.0|https://www.openml.org/d/42496|https://www.openml.org/d/45043|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/45048|
|particulate-matter-ukair-2017|394299.0|6.0|https://www.openml.org/d/42207|https://www.openml.org/d/44068|
|SGEMM_GPU_kernel_performance|241600.0|9.0|https://www.openml.org/d/43144|https://www.openml.org/d/44069|
### Dataset Curators
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.
### Licensing Information
[More Information Needed]
### Citation Information
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep
learning on typical tabular data?. NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New
Orleans, United States. hal-03723551v2
|
Vrushali/guanaco-llama2-1k | 2023-09-27T18:40:28.000Z | [
"region:us"
] | Vrushali | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Question
dtype: string
- name: Answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1333386
num_examples: 1000
download_size: 565663
dataset_size: 1333386
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ShashiVish/cover-letter-input | 2023-09-29T09:20:43.000Z | [
"region:us"
] | ShashiVish | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: Applicant Name
dtype: string
- name: Role Name
dtype: string
- name: Job Qualification Requirement
dtype: string
- name: Users Working Experience
dtype: string
- name: Hiring Company Name
dtype: string
- name: Applicant List of Skill Set
dtype: string
- name: Cover Letter
dtype: string
splits:
- name: train
num_bytes: 42804
num_examples: 37
download_size: 15643
dataset_size: 42804
---
# Dataset Card for "cover-letter-input"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hanifabdlh/quac-cahya-instructions | 2023-10-02T02:06:56.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 35257767
num_examples: 86354
download_size: 18583029
dataset_size: 35257767
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-cahya-instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Syma25/data2 | 2023-10-04T07:00:30.000Z | [
"region:us"
] | Syma25 | null | null | null | 0 | 14 | Entry not found |
the-rizz/the-rizz-corpus | 2023-10-04T14:43:56.000Z | [
"region:us"
] | the-rizz | null | null | null | 0 | 14 | Entry not found |
Intuit-GenSRF/joangaes-depression | 2023-10-05T01:00:33.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 13387322
num_examples: 27977
download_size: 8155014
dataset_size: 13387322
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "joangaes-depression"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xjlulu/ntu_adl_slot | 2023-10-05T09:07:32.000Z | [
"task_categories:token-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | xjlulu | null | null | null | 0 | 14 | ---
license: apache-2.0
task_categories:
- token-classification
language:
- en
--- |
napatswift/budget-seq2seq-json | 2023-10-05T16:34:27.000Z | [
"region:us"
] | napatswift | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: line_item
sequence: string
- name: target
dtype: string
- name: input
dtype: string
- name: format
dtype: string
splits:
- name: train
num_bytes: 231359400.0
num_examples: 19075
download_size: 47272901
dataset_size: 231359400.0
---
# Dataset Card for "budget-seq2seq-json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
skrishna/toxicity_preprop | 2023-10-05T20:47:56.000Z | [
"license:mit",
"region:us"
] | skrishna | null | null | null | 0 | 14 | ---
license: mit
---
|
Weyaxi/turk-insult | 2023-10-06T06:39:12.000Z | [
"region:us"
] | Weyaxi | null | null | null | 0 | 14 | Entry not found |
jjonhwa/wikipedia_long | 2023-10-07T07:53:13.000Z | [
"region:us"
] | jjonhwa | null | null | null | 0 | 14 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 357940449
num_examples: 10620
download_size: 185039420
dataset_size: 357940449
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia_long"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Saail/satellite_ground | 2023-10-07T21:23:09.000Z | [
"region:us"
] | Saail | null | null | null | 0 | 14 | Entry not found |
dutch_social | 2023-01-25T14:29:36.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:multi-label-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"la... | null | The dataset contains around 271,342 tweets. The tweets are filtered via the official Twitter API to
contain tweets in Dutch language or by users who have specified their location information within Netherlands
geographical boundaries. Using natural language processing we have classified the tweets for their HISCO codes.
If the user has provided their location within Dutch boundaries, we have also classified them to their respective
provinces. The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible,
Interoperable, Reusable) way. The data is subject to Twitter's Terms of Service and is licensed under Attribution-NonCommercial 4.0 International
(CC BY-NC 4.0) (2020-10-27). | @data{FK2/MTPTL7_2020,
author = {Gupta, Aakash},
publisher = {COVID-19 Data Hub},
title = {{Dutch social media collection}},
year = {2020},
version = {DRAFT VERSION},
doi = {10.5072/FK2/MTPTL7},
url = {https://doi.org/10.5072/FK2/MTPTL7}
} | null | 4 | 13 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
- nl
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- multi-label-classification
pretty_name: Dutch Social Media Collection
dataset_info:
features:
- name: full_text
dtype: string
- name: text_translation
dtype: string
- name: screen_name
dtype: string
- name: description
dtype: string
- name: desc_translation
dtype: string
- name: location
dtype: string
- name: weekofyear
dtype: int64
- name: weekday
dtype: int64
- name: month
dtype: int64
- name: year
dtype: int64
- name: day
dtype: int64
- name: point_info
dtype: string
- name: point
dtype: string
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: altitude
dtype: float64
- name: province
dtype: string
- name: hisco_standard
dtype: string
- name: hisco_code
dtype: string
- name: industry
dtype: bool_
- name: sentiment_pattern
dtype: float64
- name: subjective_pattern
dtype: float64
- name: label
dtype:
class_label:
names:
'0': neg
'1': neu
'2': pos
config_name: dutch_social
splits:
- name: train
num_bytes: 105569586
num_examples: 162805
- name: test
num_bytes: 35185351
num_examples: 54268
- name: validation
num_bytes: 34334756
num_examples: 54269
download_size: 68740666
dataset_size: 175089693
---
# Dataset Card for Dutch Social Media Collection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Dutch Social Media Collection](http://datasets.coronawhy.org/dataset.xhtml?persistentId=doi:10.5072/FK2/MTPTL7)
- **Repository:**
- **Paper:** *(in-progress)* https://doi.org/10.5072/FK2/MTPTL7
- **Leaderboard:**
- **Point of Contact:** [Aakash Gupta](mailto:aakashg80@gmail.com)
### Dataset Summary
The dataset contains 10 files with around 271,342 tweets. The tweets were filtered via the official Twitter API to contain tweets in the Dutch language or tweets by users who have specified their location within the geographical boundaries of the Netherlands. Using natural language processing, we have classified the tweets by their HISCO codes. If a user has provided a location within Dutch boundaries, we have also classified them by their respective province. The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible, Interoperable, Reusable) way. The data is subject to Twitter's Terms of Service and is licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (2020-10-27).
### Supported Tasks and Leaderboards
`sentiment analysis`, `multi-label classification`, `entity-extraction`
### Languages
The text is primarily in Dutch, with some tweets in English and other languages. The BCP-47 codes are `nl` and `en`.
## Dataset Structure
### Data Instances
An example of the data field will be:
```
{
"full_text": "@pflegearzt @Friedelkorn @LAguja44 Pardon, wollte eigentlich das zitieren: \nhttps://t.co/ejO7bIMyj8\nMeine mentions sind inzw komplett undurchschaubar weil da Leute ihren supporterclub zwecks Likes zusammengerufen haben.",
"text_translation": "@pflegearzt @Friedelkorn @ LAguja44 Pardon wollte zitieren eigentlich das:\nhttps://t.co/ejO7bIMyj8\nMeine mentions inzw sind komplett undurchschaubar weil da Leute ihren supporter club Zwecks Likes zusammengerufen haben.",
"created_at": 1583756789000,
"screen_name": "TheoRettich",
"description": "I ❤️science, therefore a Commie. ☭ FALGSC: Part of a conspiracy which wants to achieve world domination. Tankie-Cornucopian. Ecology is a myth",
"desc_translation": "I ❤️science, Therefore a Commie. ☭ FALGSC: Part of a conspiracy How many followers wants to Achieve World Domination. Tankie-Cornucopian. Ecology is a myth",
"weekofyear": 11,
"weekday": 0,
"day": 9,
"month": 3,
"year": 2020,
"location": "Netherlands",
"point_info": "Nederland",
"point": "(52.5001698, 5.7480821, 0.0)",
"latitude": 52.5001698,
"longitude": 5.7480821,
"altitude": 0,
"province": "Flevoland",
"hisco_standard": null,
"hisco_code": null,
"industry": false,
"sentiment_pattern": 0,
"subjective_pattern": 0
}
```
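Note that geo fields such as `point` are serialized as strings. A minimal sketch (assuming the `(latitude, longitude, altitude)` tuple layout shown in the instance above) of recovering numeric coordinates with the standard library:

```python
import ast

# The `point` field is a stringified (latitude, longitude, altitude) tuple,
# as in the example instance above.
point = "(52.5001698, 5.7480821, 0.0)"
latitude, longitude, altitude = ast.literal_eval(point)
```

`ast.literal_eval` parses the tuple literal safely, without resorting to `eval`.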
### Data Fields
| Column Name | Description |
| --- | --- |
| full_text | Original text in the tweet |
| text_translation | English translation of the full text |
| created_at | Date of tweet creation |
| screen_name | username of the tweet author |
| description | description as provided in the users bio |
| desc_translation | English translation of user's bio/ description |
| location | Location information as provided in the user's bio |
| weekofyear | week of the year |
| weekday | Day of the week information; Monday=0....Sunday = 6|
| month | Month of tweet creation |
| year | year of tweet creation |
| day | day of tweet creation |
| point_info | point information from location column |
| point | tuple giving lat, lon & altitude information |
| latitude | geo-referencing information derived from location data |
| longitude | geo-referencing information derived from location data |
| altitude | geo-referencing information derived from location data|
| province | Province given location data of user |
| hisco_standard | HISCO standard key word; if available in tweet |
| hisco_code| HISCO standard code as derived from `hisco_standard`|
| industry | Whether the tweet talks about industry `(True/False)` |
| sentiment_score | Sentiment score -1.0 to 1.0 |
| subjectivity_score | Subjectivity scores 0 to 1 |
Missing values are replaced with empty strings or -1 (-100 for missing sentiment_score).
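The date-derived columns (`weekofyear`, `weekday`, `day`, `month`, `year`) are redundant with the tweet's creation timestamp; a sketch of how they can be recomputed from the epoch-millisecond `created_at` value shown in the Data Instances example (assuming UTC):

```python
from datetime import datetime, timezone

def derive_date_fields(created_at_ms: int) -> dict:
    # created_at is epoch milliseconds in the example instance; UTC assumed.
    dt = datetime.fromtimestamp(created_at_ms / 1000, tz=timezone.utc)
    return {
        "weekofyear": dt.isocalendar()[1],  # ISO week number
        "weekday": dt.weekday(),            # Monday = 0 ... Sunday = 6
        "day": dt.day,
        "month": dt.month,
        "year": dt.year,
    }

# Value taken from the example instance (9 March 2020, a Monday, ISO week 11).
fields = derive_date_fields(1583756789000)
```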
### Data Splits
Data has been split into Train: 60%, Validation: 20% and Test: 20%
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The tweets were hydrated using Twitter's API and then filtered to those in the Dutch language and/or from users who had indicated a location within the geographical borders of the Netherlands.
#### Who are the source language producers?
The language producers are Twitter users who have identified their location within the geographical boundaries of the Netherlands, or those who have tweeted in the Dutch language.
### Annotations
Using natural language processing, we have classified the tweets by industry and by HSN HISCO codes.
Depending on the user's location, their provincial information is also added. Please check the relevant file/column for detailed information.
The tweets are also scored for sentiment and subjectivity:
Sentiment scores range from -1 to +1.
Subjectivity scores range from 0 to 1.
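The score ranges stated above can be checked mechanically; a minimal validation sketch (hypothetical helper, not part of the dataset tooling):

```python
def is_valid_scores(sentiment: float, subjectivity: float) -> bool:
    # Sentiment lies in [-1, +1] and subjectivity in [0, 1],
    # per the annotation description above.
    return -1.0 <= sentiment <= 1.0 and 0.0 <= subjectivity <= 1.0
```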
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
As of the writing of this data card, no anonymization has been carried out on the tweets or user data. As such, if a Twitter user has shared any personal or sensitive information, it may be available in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[Aakash Gupta](mailto:aakashg80@gmail.com)
*Th!nkEvolve Consulting* and Researcher at CoronaWhy
### Licensing Information
CC BY-NC 4.0
### Citation Information
@data{FK2/MTPTL7_2020,
author = {Gupta, Aakash},
publisher = {COVID-19 Data Hub},
title = {{Dutch social media collection}},
year = {2020},
version = {DRAFT VERSION},
doi = {10.5072/FK2/MTPTL7},
url = {https://doi.org/10.5072/FK2/MTPTL7}
}
### Contributions
Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset. |
factckbr | 2023-01-25T14:30:15.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:mit",
"region:us"
] | null | A dataset to study Fake News in Portuguese, presenting a supposedly false News along with their respective fact check and classification.
The data is collected via ClaimReview, a structured data schema used by fact-checking agencies to share their results with search engines, enabling data collection in real time.
The FACTCK.BR dataset contains 1309 claims, each with its corresponding label. | @inproceedings{10.1145/3323503.3361698,
author = {Moreno, Jo\~{a}o and Bressan, Gra\c{c}a},
title = {FACTCK.BR: A New Dataset to Study Fake News},
year = {2019},
isbn = {9781450367639},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3323503.3361698},
doi = {10.1145/3323503.3361698},
abstract = {Machine learning algorithms can be used to combat fake news propagation. For the news classification, labeled datasets are required, however, among the existing datasets, few separate verified false from skewed ones with a good variety of sources. This work presents FACTCK.BR, a new dataset to study Fake News in Portuguese, presenting a supposedly false News along with their respective fact check and classification. The data is collected from the ClaimReview, a structured data schema used by fact check agencies to share their results in search engines, enabling data collect in real time.},
booktitle = {Proceedings of the 25th Brazillian Symposium on Multimedia and the Web},
pages = {525–527},
numpages = {3},
keywords = {fake news, fact check, information extraction, dataset},
location = {Rio de Janeiro, Brazil},
series = {WebMedia '19}
} | null | 3 | 13 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
pretty_name: FACTCK BR
dataset_info:
features:
- name: url
dtype: string
- name: author
dtype: string
- name: date
dtype: string
- name: claim
dtype: string
- name: review
dtype: string
- name: title
dtype: string
- name: rating
dtype: float32
- name: best_rating
dtype: float32
- name: label
dtype:
class_label:
names:
'0': falso
'1': distorcido
'2': impreciso
'3': exagerado
'4': insustentável
'5': verdadeiro
'6': outros
'7': subestimado
'8': impossível provar
'9': discutível
'10': sem contexto
'11': de olho
'12': verdadeiro, mas
'13': ainda é cedo para dizer
splits:
- name: train
num_bytes: 750646
num_examples: 1313
download_size: 721314
dataset_size: 750646
---
# Dataset Card for FACTCK BR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/jghm-f/FACTCK.BR
- **Repository:** https://github.com/jghm-f/FACTCK.BR
- **Paper:** https://dl.acm.org/doi/10.1145/3323503.3361698
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A dataset to study Fake News in Portuguese, presenting supposedly false news items along with their respective fact checks and classifications.
The data is collected via ClaimReview, a structured data schema used by fact-checking agencies to share their results with search engines, enabling data collection in real time.
The FACTCK.BR dataset contains 1309 claims, each with its corresponding label.
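The `label` feature is an integer class id; a minimal sketch of decoding it back to the Portuguese rating names listed in this card's metadata block:

```python
# Class names copied from the dataset_info block above, in label-id order.
FACTCK_LABELS = [
    "falso", "distorcido", "impreciso", "exagerado", "insustentável",
    "verdadeiro", "outros", "subestimado", "impossível provar",
    "discutível", "sem contexto", "de olho", "verdadeiro, mas",
    "ainda é cedo para dizer",
]

def decode_label(label_id: int) -> str:
    # Map an integer class id to its rating name.
    return FACTCK_LABELS[label_id]
```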
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. |
allenai/peer_read | 2022-11-18T21:37:46.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"acceptability-classification",
"arxiv:1804.09635",
"region:us"
] | allenai | PeerRead is a dataset of scientific peer reviews available to help researchers study this important artifact. The dataset consists of over 14K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR, as well as over 10K textual peer reviews written by experts for a subset of the papers. | @inproceedings{kang18naacl,
title = {A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications},
author = {Dongyeop Kang and Waleed Ammar and Bhavana Dalvi and Madeleine van Zuylen and Sebastian Kohlmeier and Eduard Hovy and Roy Schwartz},
booktitle = {Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL)},
address = {New Orleans, USA},
month = {June},
url = {https://arxiv.org/abs/1804.09635},
year = {2018}
} | null | 3 | 13 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: peerread
pretty_name: PeerRead
tags:
- acceptability-classification
dataset_info:
- config_name: parsed_pdfs
features:
- name: name
dtype: string
- name: metadata
struct:
- name: source
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: emails
sequence: string
- name: sections
sequence:
- name: heading
dtype: string
- name: text
dtype: string
- name: references
sequence:
- name: title
dtype: string
- name: author
sequence: string
- name: venue
dtype: string
- name: citeRegEx
dtype: string
- name: shortCiteRegEx
dtype: string
- name: year
dtype: int32
- name: referenceMentions
sequence:
- name: referenceID
dtype: int32
- name: context
dtype: string
- name: startOffset
dtype: int32
- name: endOffset
dtype: int32
- name: year
dtype: int32
- name: abstractText
dtype: string
- name: creator
dtype: string
splits:
- name: train
num_bytes: 571263679
num_examples: 11090
- name: test
num_bytes: 34284777
num_examples: 637
- name: validation
num_bytes: 32488519
num_examples: 637
download_size: 1246688292
dataset_size: 638036975
- config_name: reviews
features:
- name: id
dtype: string
- name: conference
dtype: string
- name: comments
dtype: string
- name: subjects
dtype: string
- name: version
dtype: string
- name: date_of_submission
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: accepted
dtype: bool
- name: abstract
dtype: string
- name: histories
sequence:
sequence: string
- name: reviews
sequence:
- name: date
dtype: string
- name: title
dtype: string
- name: other_keys
dtype: string
- name: originality
dtype: string
- name: comments
dtype: string
- name: is_meta_review
dtype: bool
- name: is_annotated
dtype: bool
- name: recommendation
dtype: string
- name: replicability
dtype: string
- name: presentation_format
dtype: string
- name: clarity
dtype: string
- name: meaningful_comparison
dtype: string
- name: substance
dtype: string
- name: reviewer_confidence
dtype: string
- name: soundness_correctness
dtype: string
- name: appropriateness
dtype: string
- name: impact
dtype: string
splits:
- name: train
num_bytes: 15234922
num_examples: 11090
- name: test
num_bytes: 878906
num_examples: 637
- name: validation
num_bytes: 864799
num_examples: 637
download_size: 1246688292
dataset_size: 16978627
---
# Dataset Card for peer_read
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1804.09635
- **Repository:** https://github.com/allenai/PeerRead
- **Paper:** https://arxiv.org/pdf/1804.09635.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
PeerRead is a dataset of scientific peer reviews available to help researchers study this important artifact. The dataset consists of over 14K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR, as well as over 10K textual peer reviews written by experts for a subset of the papers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
#### parsed_pdfs
- `name`: `string` Filename in the dataset
- `metadata`: `dict` Paper metadata
- `source`: `string` Paper source
- `authors`: `list<string>` List of paper authors
- `title`: `string` Paper title
- `sections`: `list<dict>` List of section heading and corresponding description
- `heading`: `string` Section heading
- `text`: `string` Section description
- `references`: `string` List of references
- `title`: `string` Title of reference paper
- `author`: `list<string>` List of reference paper authors
- `venue`: `string` Reference venue
- `citeRegEx`: `string` Reference citeRegEx
- `shortCiteRegEx`: `string` Reference shortCiteRegEx
- `year`: `int` Reference publish year
- `referenceMentions`: `list<string>` List of reference mentions
- `referenceID`: `int` Reference mention ID
- `context`: `string` Reference mention context
- `startOffset`: `int` Reference startOffset
- `endOffset`: `int` Reference endOffset
- `year`: `int` Paper publish year
- `abstractText`: `string` Paper abstract
- `creator`: `string` Paper creator
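As an illustration of how the nested `parsed_pdfs` fields above might be consumed, a sketch that joins a paper's abstract and section texts; the record layout follows the `list<dict>` description above, and the record itself is a toy illustration, not real PeerRead data:

```python
def paper_full_text(example: dict) -> str:
    # Concatenate the abstract and section texts of one parsed_pdfs record,
    # using the field layout described above.
    meta = example["metadata"]
    parts = [meta["abstractText"]] + [s["text"] for s in meta["sections"]]
    return "\n\n".join(parts)

# Toy record for illustration only.
toy_record = {
    "metadata": {
        "abstractText": "A toy abstract.",
        "sections": [{"heading": "1 Introduction", "text": "A toy section."}],
    }
}
full_text = paper_full_text(toy_record)
```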
#### reviews
- `id`: `int` Review ID
- `conference`: `string` Conference name
- `comments`: `string` Review comments
- `subjects`: `string` Review subjects
- `version`: `string` Review version
- `date_of_submission`: `string` Submission date
- `title`: `string` Paper title
- `authors`: `list<string>` List of paper authors
- `accepted`: `bool` Paper accepted flag
- `abstract`: `string` Paper abstract
- `histories`: `list<string>` Paper details with link
- `reviews`: `dict` Paper reviews
- `date`: `string` Date of review
- `title`: `string` Paper title
- `other_keys`: `string` Reviewer other details
- `originality`: `string` Originality score
- `comments`: `string` Reviewer comments
- `is_meta_review`: `bool` Review type flag
- `recommendation`: `string` Reviewer recommendation
- `replicability`: `string` Replicability score
- `presentation_format`: `string` Presentation type
- `clarity`: `string` Clarity score
- `meaningful_comparison`: `string` Meaningful comparison score
- `substance`: `string` Substance score
- `reviewer_confidence`: `string` Reviewer confidence score
- `soundness_correctness`: `string` Soundness correctness score
- `appropriateness`: `string` Appropriateness score
- `impact`: `string` Impact score
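As a sketch of how the top-level `accepted` flag in the `reviews` config might be aggregated (toy records below, not real PeerRead data):

```python
def acceptance_rate(examples) -> float:
    # examples: iterable of dicts carrying the boolean `accepted`
    # field described above.
    examples = list(examples)
    accepted = sum(1 for ex in examples if ex["accepted"])
    return accepted / len(examples)

# Toy records for illustration only.
toy = [{"accepted": True}, {"accepted": False}, {"accepted": True}]
rate = acceptance_rate(toy)
```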
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dongyeop Kang, Waleed Ammar, Bhavana Dalvi Mishra, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, Roy Schwartz
### Licensing Information
[More Information Needed]
### Citation Information
@inproceedings{kang18naacl,
title = {A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications},
author = {Dongyeop Kang and Waleed Ammar and Bhavana Dalvi and Madeleine van Zuylen and Sebastian Kohlmeier and Eduard Hovy and Roy Schwartz},
booktitle = {Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL)},
address = {New Orleans, USA},
month = {June},
url = {https://arxiv.org/abs/1804.09635},
year = {2018}
}
### Contributions
Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset. |
roman_urdu | 2023-01-25T14:43:17.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ur",
"license:unknown",
"region:us"
] | null | This is an extensive compilation of Roman Urdu Dataset (Urdu written in Latin/Roman script) tagged for sentiment analysis. | @InProceedings{Sharf:2018,
title = "Performing Natural Language Processing on Roman Urdu Datasets",
authors = "Zareen Sharf and Saif Ur Rahman",
booktitle = "International Journal of Computer Science and Network Security",
volume = "18",
number = "1",
pages = "141-148",
year = "2018"
}
@misc{Dua:2019,
author = "Dua, Dheeru and Graff, Casey",
year = "2017",
title = "{UCI} Machine Learning Repository",
url = "http://archive.ics.uci.edu/ml",
institution = "University of California, Irvine, School of Information and Computer Sciences"
} | null | 1 | 13 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ur
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: roman-urdu-data-set
pretty_name: Roman Urdu Dataset
dataset_info:
features:
- name: sentence
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': Positive
'1': Negative
'2': Neutral
splits:
- name: train
num_bytes: 1633423
num_examples: 20229
download_size: 1628349
dataset_size: 1633423
---
# Dataset Card for Roman Urdu Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set)
- **Point of Contact:** [Zareen Sharf](mailto:zareensharf76@gmail.com)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Urdu
## Dataset Structure
[More Information Needed]
### Data Instances
```
Wah je wah,Positive,
```
### Data Fields
Each row consists of a short Roman Urdu text followed by a sentiment label. Each label is one of `Positive`, `Negative`, and `Neutral`. Note that the original source file is a comma-separated values file.
* `sentence`: A short Roman Urdu text
* `sentiment`: One of `Positive`, `Negative`, and `Neutral`, indicating the polarity of the sentiment expressed in the sentence
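Since the source is a CSV file whose rows end with a trailing comma (as in the Data Instances example above), a minimal parsing sketch using the standard library:

```python
import csv
import io

# Row copied from the Data Instances example; the trailing comma
# yields an empty third field.
raw_row = "Wah je wah,Positive,"
sentence, sentiment, _ = next(csv.reader(io.StringIO(raw_row)))
```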
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{Sharf:2018,
title = "Performing Natural Language Processing on Roman Urdu Datasets",
authors = "Zareen Sharf and Saif Ur Rahman",
booktitle = "International Journal of Computer Science and Network Security",
volume = "18",
number = "1",
pages = "141-148",
year = "2018"
}
@misc{Dua:2019,
author = "Dua, Dheeru and Graff, Casey",
year = "2017",
title = "{UCI} Machine Learning Repository",
url = "http://archive.ics.uci.edu/ml",
institution = "University of California, Irvine, School of Information and Computer Sciences"
}
```
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. |
swedish_ner_corpus | 2023-01-25T14:45:21.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:sv",
"license:cc-by-4.0",
"region:us"
] | null | Webbnyheter 2012 from Spraakbanken, semi-manually annotated and adapted for CoreNLP Swedish NER. Semi-manually defined in this case as: Bootstrapped from Swedish Gazetters then manually correcte/reviewed by two independent native speaking swedish annotators. No annotator agreement calculated. | null | null | 1 | 13 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- sv
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Swedish NER Corpus
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': '0'
'1': LOC
'2': MISC
'3': ORG
'4': PER
splits:
- name: train
num_bytes: 2032630
num_examples: 6886
- name: test
num_bytes: 755234
num_examples: 2453
download_size: 1384558
dataset_size: 2787864
---
# Dataset Card for Swedish NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/klintan/swedish-ner-corpus](https://github.com/klintan/swedish-ner-corpus)
- **Repository:** [https://github.com/klintan/swedish-ner-corpus](https://github.com/klintan/swedish-ner-corpus)
- **Point of contact:** [Andreas Klintberg](mailto:ankl@kth.se)
### Dataset Summary
Webbnyheter 2012 from Spraakbanken, semi-manually annotated and adapted for CoreNLP Swedish NER. Semi-manually defined in this case as: bootstrapped from Swedish Gazetteers, then manually corrected/reviewed by two independent native-speaking Swedish annotators. No annotator agreement was calculated.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Swedish
## Dataset Structure
### Data Instances
A sample dataset instance is provided below:
```json
{'id': '3',
'ner_tags': [4, 4, 0, 0, 0, 0, 0, 0, 3, 3, 0],
'tokens': ['Margaretha',
'Fahlgren',
',',
'professor',
'i',
'litteraturvetenskap',
',',
'vice-rektor',
'Uppsala',
'universitet',
'.']}
```
### Data Fields
- `id`: id of the sentence
- `tokens`: the tokens of the sentence
- `ner_tags`: the NER tag of each token
Full fields:
```json
{
  "id": {
    "feature_type": "Value",
    "dtype": "string"
  },
  "tokens": {
    "feature_type": "Sequence",
    "feature": {
      "feature_type": "Value",
      "dtype": "string"
    }
  },
  "ner_tags": {
    "feature_type": "Sequence",
    "dtype": "int32",
    "feature": {
      "feature_type": "ClassLabel",
      "dtype": "int32",
      "class_names": ["0", "LOC", "MISC", "ORG", "PER"]
    }
  }
}
```
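For illustration, the integer `ner_tags` in the sample instance above can be decoded back to label names with the class list from the schema (a minimal sketch; the mapping follows the `ClassLabel` definition in this card):

```python
# Class names in the order defined by this card's ClassLabel schema.
CLASS_NAMES = ["0", "LOC", "MISC", "ORG", "PER"]

# Sample instance from the "Data Instances" section above.
tokens = ["Margaretha", "Fahlgren", ",", "professor", "i",
          "litteraturvetenskap", ",", "vice-rektor", "Uppsala",
          "universitet", "."]
ner_tags = [4, 4, 0, 0, 0, 0, 0, 0, 3, 3, 0]

# Pair each token with its decoded tag name.
labeled = [(token, CLASS_NAMES[tag]) for token, tag in zip(tokens, ner_tags)]
```

For example, `labeled[0]` is `("Margaretha", "PER")` and `labeled[8]` is `("Uppsala", "ORG")`.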
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original dataset was provided by Språkbanken and consists of news from Swedish newspapers' websites.
### Licensing Information
https://github.com/klintan/swedish-ner-corpus/blob/master/LICENSE
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
yoruba_bbc_topics | 2023-01-25T15:03:35.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:yo",
"license:unknown",
"region:us"
] | null | A collection of news article headlines in Yoruba from BBC Yoruba.
Each headline is labeled with one of the following classes: africa,
entertainment, health, nigeria, politics, sport or world.
The dataset was presented in the paper:
Hedderich, Adelani, Zhu, Alabi, Markus, Klakow: Transfer Learning and
Distant Supervision for Multilingual Transformer Models: A Study on
African Languages (EMNLP 2020). | @inproceedings{hedderich-etal-2020-transfer,
title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages",
author = "Hedderich, Michael A. and
Adelani, David and
Zhu, Dawei and
Alabi, Jesujoba and
Markus, Udia and
Klakow, Dietrich",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.204",
doi = "10.18653/v1/2020.emnlp-main.204",
} | null | 0 | 13 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- yo
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: Yoruba Bbc News Topic Classification Dataset (YorubaBbcTopics)
dataset_info:
features:
- name: news_title
dtype: string
- name: label
dtype:
class_label:
names:
'0': africa
'1': entertainment
'2': health
'3': nigeria
'4': politics
'5': sport
'6': world
- name: date
dtype: string
- name: bbc_url_id
dtype: string
splits:
- name: train
num_bytes: 197117
num_examples: 1340
- name: validation
num_bytes: 27771
num_examples: 189
- name: test
num_bytes: 55652
num_examples: 379
download_size: 265480
dataset_size: 280540
---
# Dataset Card for Yoruba BBC News Topic Classification dataset (yoruba_bbc_topics)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** -
- **Repository:** https://github.com/uds-lsv/transfer-distant-transformer-african
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.204/
- **Leaderboard:** -
- **Point of Contact:** Michael A. Hedderich and David Adelani
{mhedderich, didelani} (at) lsv.uni-saarland.de
### Dataset Summary
A news headline topic classification dataset, similar to AG-news, for Yorùbá. The news headlines were collected from [BBC Yoruba](https://www.bbc.com/yoruba).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Yorùbá (ISO 639-1: yo)
## Dataset Structure
### Data Instances
An instance consists of a news title sentence and the corresponding topic label as well as publishing information (date and website id).
### Data Fields
- `news_title`: A news title.
- `label`: The label describing the topic of the news title. Can be one of the following classes: africa, entertainment, health, nigeria, politics, sport or world.
- `date`: The publication date (in Yorùbá).
- `bbc_url_id`: The identifier of the article in the BBC URL.
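As a minimal sketch of how the integer `label` maps back to a topic name (the class order follows this card's schema; the instance below is a hypothetical placeholder, not a real row):

```python
# Topic names in the order defined by this card's ClassLabel schema.
TOPICS = ["africa", "entertainment", "health", "nigeria", "politics", "sport", "world"]

# Hypothetical instance for illustration only (headline and metadata are placeholders).
example = {"news_title": "<some headline>", "label": 5, "date": "", "bbc_url_id": ""}

# Decode the integer label to its topic name.
topic = TOPICS[example["label"]]
```

Here `topic` evaluates to `"sport"`.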
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset. |
NYTK/HuCoPA | 2023-03-27T09:54:02.000Z | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:hu",
"license:bsd-2-clause",
"commonsense-reasoning",
"region:us"
] | NYTK | null | null | null | 0 | 13 | ---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- hu
license:
- bsd-2-clause
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- other
task_ids: []
pretty_name: HuCoPA
tags:
- commonsense-reasoning
---
# Dataset Card for HuCoPA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuCoPA dataset](https://github.com/nytud/HuCoPA)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:ligeti-nagy.noemi@nytud.hu)
### Dataset Summary
This is the dataset card for the Hungarian Choice of Plausible Alternatives Corpus (HuCoPA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](https://hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original English CoPA corpus (Roemmele et al., 2011).
### Supported Tasks and Leaderboards
- commonsense reasoning
- question answering
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a premise, a question ('cause' or 'effect'), two alternatives and a label (1 or 2).
An example:
```
{"idx": "1",
"question": "cause",
"label": "1",
"premise": "A testem árnyékot vetett a fűre.",
"choice1": "Felkelt a nap.",
"choice2": "A füvet lenyírták."}
```
### Data Fields
- `idx`: unique id of the instances, an integer between 1 and 1000;
- `question`: "cause" or "effect". It indicates the type of causal relation sought: for "cause", the task is to find the more plausible alternative that may be a cause of the premise; for "effect", a plausible result of the premise;
- premise: the premise, a sentence;
- choice1: the first alternative, a sentence;
- choice2: the second alternative, a sentence;
- label: the number of the more plausible alternative (1 or 2).
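For illustration, the gold alternative can be recovered from an instance as follows (a minimal sketch using the sample instance above; note that `label` is stored as a string in the sample):

```python
def gold_choice(instance):
    """Return the alternative marked as more plausible (label is '1' or '2')."""
    return instance["choice1"] if int(instance["label"]) == 1 else instance["choice2"]

# Sample instance from the "Data Instances" section above.
instance = {
    "idx": "1",
    "question": "cause",
    "label": "1",
    "premise": "A testem árnyékot vetett a fűre.",
    "choice1": "Felkelt a nap.",
    "choice2": "A füvet lenyírták.",
}
```

`gold_choice(instance)` returns `"Felkelt a nap."` for this example.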
### Data Splits
HuCoPA has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split |
|---------------|----------------------------------|
| train | 400 |
| validation | 100 |
| test | 500 |
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:ligeti-nagy.noemi@nytud.hu), or check [HuLU's website](https://hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the content of the CoPA corpus. Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.
### Annotations
#### Annotation process
The instances initially inherited their original labels from the CoPA dataset. Each instance was then annotated by a human annotator. If the original label and the human annotator's label did not match, we manually curated the instance and assigned a final label to it. This step was necessary to ensure that the causal relationship had not been changed or lost during the translation process.
#### Who are the annotators?
The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.
## Additional Information
The human performance on the test set is 96% (accuracy).
### Licensing Information
HuCoPA is released under the BSD 2-Clause License.
Copyright (c) 2010, University of Southern California
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
editors = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {431–446}
}
```
and to:
Roemmele, M., Bejan, C., and Gordon, A. (2011) Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, Stanford University, March 21-23, 2011.
```
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset.
|
ai4bharat/samanantar | 2022-12-07T15:33:46.000Z | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"language:as",
"language:bn",
"language:gu",
"language:hi",
... | ai4bharat | Samanantar is the largest publicly available parallel corpora collection for Indic languages: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu. The corpus has 49.6M sentence pairs between English to Indian Languages. | @misc{ramesh2021samanantar,
title={Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages},
author={Gowtham Ramesh and Sumanth Doddapaneni and Aravinth Bheemaraj and Mayank Jobanputra and Raghavan AK and Ajitesh Sharma and Sujit Sahoo and Harshita Diddee and Mahalakshmi J and Divyanshu Kakwani and Navneet Kumar and Aswin Pradeep and Srihari Nagaraj and Kumar Deepak and Vivek Raghavan and Anoop Kunchukuttan and Pratyush Kumar and Mitesh Shantadevi Khapra},
year={2021},
eprint={2104.05596},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 12 | 13 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- translation
task_ids: []
pretty_name: Samanantar
tags:
- conditional-text-generation
---
# Dataset Card for Samanantar
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/samanantar/
- **Repository:**
- **Paper:** [Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages](https://arxiv.org/abs/2104.05596)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Samanantar is the largest publicly available parallel corpora collection for Indic languages: Assamese, Bengali,
Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu.
The corpus has 49.6M sentence pairs between English and the Indian languages.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Samanantar contains parallel sentences between English (`en`) and 11 Indic languages:
- Assamese (`as`),
- Bengali (`bn`),
- Gujarati (`gu`),
- Hindi (`hi`),
- Kannada (`kn`),
- Malayalam (`ml`),
- Marathi (`mr`),
- Odia (`or`),
- Punjabi (`pa`),
- Tamil (`ta`) and
- Telugu (`te`).
## Dataset Structure
### Data Instances
```
{
'idx': 0,
'src': 'Prime Minister Narendra Modi met Her Majesty Queen Maxima of the Kingdom of the Netherlands today.',
'tgt': 'নতুন দিল্লিতে সোমবার প্রধানমন্ত্রী শ্রী নরেন্দ্র মোদীর সঙ্গে নেদারন্যান্ডসের মহারানী ম্যাক্সিমা সাক্ষাৎ করেন।',
'data_source': 'pmi'
}
```
### Data Fields
- `idx` (int): ID.
- `src` (string): Sentence in the source language (English).
- `tgt` (string): Sentence in the target language (one of the 11 Indic languages).
- `data_source` (string): Source of the data.
For created data sources, depending on the target language, it might be one of:
- anuvaad_catchnews
- anuvaad_DD_National
- anuvaad_DD_sports
- anuvaad_drivespark
- anuvaad_dw
- anuvaad_financialexpress
- anuvaad-general_corpus
- anuvaad_goodreturns
- anuvaad_indianexpress
- anuvaad_mykhel
- anuvaad_nativeplanet
- anuvaad_newsonair
- anuvaad_nouns_dictionary
- anuvaad_ocr
- anuvaad_oneindia
- anuvaad_pib
- anuvaad_pib_archives
- anuvaad_prothomalo
- anuvaad_timesofindia
- asianetnews
- betterindia
- bridge
- business_standard
- catchnews
- coursera
- dd_national
- dd_sports
- dwnews
- drivespark
- fin_express
- goodreturns
- gu_govt
- jagran-business
- jagran-education
- jagran-sports
- ie_business
- ie_education
- ie_entertainment
- ie_general
- ie_lifestyle
- ie_news
- ie_sports
- ie_tech
- indiccorp
- jagran-entertainment
- jagran-lifestyle
- jagran-news
- jagran-tech
- khan_academy
- Kurzgesagt
- marketfeed
- mykhel
- nativeplanet
- nptel
- ocr
- oneindia
- pa_govt
- pmi
- pranabmukherjee
- sakshi
- sentinel
- thewire
- toi
- tribune
- vsauce
- wikipedia
- zeebiz
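A minimal sketch of working with the `data_source` field, e.g. keeping or counting rows per source (the rows below are hypothetical placeholders with the structure shown in the Data Instances section):

```python
from collections import Counter

# Hypothetical rows for illustration; src/tgt strings are placeholders.
rows = [
    {"idx": 0, "src": "...", "tgt": "...", "data_source": "pmi"},
    {"idx": 1, "src": "...", "tgt": "...", "data_source": "wikipedia"},
    {"idx": 2, "src": "...", "tgt": "...", "data_source": "pmi"},
]

# Keep only rows from a given source, and count rows per source.
pmi_rows = [row for row in rows if row["data_source"] == "pmi"]
source_counts = Counter(row["data_source"] for row in rows)
```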
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@misc{ramesh2021samanantar,
title={Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages},
author={Gowtham Ramesh and Sumanth Doddapaneni and Aravinth Bheemaraj and Mayank Jobanputra and Raghavan AK and Ajitesh Sharma and Sujit Sahoo and Harshita Diddee and Mahalakshmi J and Divyanshu Kakwani and Navneet Kumar and Aswin Pradeep and Srihari Nagaraj and Kumar Deepak and Vivek Raghavan and Anoop Kunchukuttan and Pratyush Kumar and Mitesh Shantadevi Khapra},
year={2021},
eprint={2104.05596},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
albertvillanova/legal_contracts | 2021-12-10T18:03:23.000Z | [
"region:us"
] | albertvillanova | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 15 | 13 | Entry not found |
gsarti/wmt_vat | 2022-10-27T08:37:41.000Z | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|wmt16",
"source_datasets:extended|wmt17",
"sourc... | gsarti | The Variance-Aware Machine Translation corpus contains 70 small and discriminative test sets for machine translation (MT)
evaluation called variance-aware test sets (VAT), covering 35 translation directions from WMT16 to WMT20 competitions.
VAT is automatically created by a novel variance-aware filtering method that filters the indiscriminative test instances
of the current MT benchmark without any human labor. Experimental results show that VAT outperforms the original WMT benchmark
in terms of the correlation with human judgment across mainstream language pairs and test sets. Further analysis on the properties
of VAT reveals the challenging linguistic features (e.g., translation of low-frequency words and proper nouns) for the competitive
MT systems, providing guidance for constructing future MT test sets. | @inproceedings{
zhan2021varianceaware,
title={Variance-Aware Machine Translation Test Sets},
author={Runzhe Zhan and Xuebo Liu and Derek F. Wong and Lidia S. Chao},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems, Datasets and Benchmarks Track},
year={2021},
url={https://openreview.net/forum?id=hhKA5k0oVy5}
} | null | 7 | 13 | ---
annotations_creators:
- found
language_creators:
- expert-generated
language:
- cs
- de
- en
- et
- fi
- fr
- gu
- iu
- ja
- kk
- km
- lt
- lv
- pl
- ps
- ro
- ru
- ta
- tr
- zh
license:
- unknown
multilinguality:
- multilingual
- translation
size_categories:
- unknown
source_datasets:
- extended|wmt16
- extended|wmt17
- extended|wmt18
- extended|wmt19
- extended|wmt20
task_categories:
- text-generation
- translation
task_ids: []
pretty_name: wmt_vat
tags:
- conditional-text-generation
---
# Dataset Card for Variance-Aware MT Test Sets
## Table of Contents
- [Dataset Card for Variance-Aware MT Test Sets](#dataset-card-for-variance-aware-mt-test-sets)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Machine Translation](#machine-translation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/NLP2CT/Variance-Aware-MT-Test-Sets)
- **Paper:** [NeurIPS](https://openreview.net/forum?id=hhKA5k0oVy5)
- **Point of Contact:** [Runzhe Zhan](mailto:nlp2ct.runzhe@gmail.com)
### Dataset Summary
This dataset comprises 70 small and discriminative test sets for machine translation (MT) evaluation called variance-aware test sets (VAT), covering 35 translation directions from WMT16 to WMT20 competitions. VAT is automatically created by a novel variance-aware filtering method that filters the indiscriminative test instances of the current MT benchmark without any human labor. Experimental results show that VAT outperforms the original WMT benchmark in terms of the correlation with human judgment across mainstream language pairs and test sets. Further analysis on the properties of VAT reveals the challenging linguistic features (e.g., translation of low-frequency words and proper nouns) for the competitive MT systems, providing guidance for constructing future MT test sets.
**Disclaimer**: *The VAT test sets are hosted through GitHub by the [Natural Language Processing & Portuguese-Chinese Machine Translation Laboratory (NLP2CT Lab)](http://nlp2ct.cis.um.edu.mo/) of the University of Macau. They were introduced by the paper [Variance-Aware Machine Translation Test Sets](https://openreview.net/forum?id=hhKA5k0oVy5) by [Runzhe Zhan](https://runzhe.me/), [Xuebo Liu](https://sunbowliu.github.io/), [Derek F. Wong](https://www.fst.um.edu.mo/personal/derek-wong/), [Lidia S. Chao](https://aclanthology.org/people/l/lidia-s-chao/) and follow the original licensing for WMT test sets.*
### Supported Tasks and Leaderboards
#### Machine Translation
Refer to the [original paper](https://openreview.net/forum?id=hhKA5k0oVy5) for additional details on model evaluation on VAT.
### Languages
The following table taken from the original paper lists the languages supported by the VAT test sets, for a total of 70 language pairs:
| ↔️ | `wmt16` | `wmt17` | `wmt18` | `wmt19` | `wmt20` |
|----------:|:--------|:--------|:--------|--------:|--------:|
| `xx_en` | `cs`,`de`,`fi`, <br /> `ro`,`ru`,`tr` | `cs`,`de`,`fi`,`lv`, <br /> `ru`,`tr`,`zh` | `cs`,`de`,`et`,`fi`, <br /> `ru`,`tr`,`zh` | `de`,`fi`,`gu`, <br /> `kk`,`lt`,`ru`,`zh` | `cs`,`de`,`iu`,`ja`,`km`, <br /> `pl`,`ps`,`ru`,`ta`,`zh`|
| `en_xx` | `ru` | `cs`,`de`,`fi`, <br /> `lv`,`ru`,`tr`,`zh` | `cs`,`de`,`et`,`fi`, <br /> `ru`,`tr`,`zh` | `cs`,`de`,`fi`,`gu`, <br /> `kk`,`lt`,`ru`,`zh` | `cs`,`de`,`ja`,`pl`, <br /> `ru`,`ta`,`zh`|
| `xx_yy` | / | / | / | `de_cs`,`de_fr`, <br /> `fr_de` | / |
To use any one of the test sets, pass `wmtXX_src_tgt` as the configuration name to the `load_dataset` command. E.g., to load the English-Russian test set from `wmt16`, use `load_dataset('gsarti/wmt_vat', 'wmt16_en_ru')`.
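The `wmtXX_src_tgt` naming scheme above can be expressed as a small helper (a sketch of the pattern stated in this card, not part of any library):

```python
def vat_config(competition: str, src: str, tgt: str) -> str:
    """Build a VAT configuration name, e.g. ('wmt16', 'en', 'ru') -> 'wmt16_en_ru'."""
    return f"{competition}_{src}_{tgt}"
```

The resulting string is what you would pass as the second argument of `load_dataset('gsarti/wmt_vat', ...)`.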
## Dataset Structure
### Data Instances
A sample from the `test` split (the only available split) for the WMT16 English-Russian language pair (`wmt16_en_ru` config) is provided below. All configurations have the same structure.
```python
{
'orig_id': 0,
'source': 'The social card of residents of Ivanovo region is to be recognised as an electronic payment instrument.',
'reference': 'Социальная карта жителя Ивановской области признается электронным средством платежа.'
}
```
The text is provided as in the original dataset, without further preprocessing or tokenization.
### Data Fields
- `orig_id`: Id corresponding to the row id in the original dataset, before variance-aware filtering.
- `source`: The source sentence.
- `reference`: The reference sentence in the target language.
### Data Splits
Taken from the original repository:
| Configuration | # Sentences | # Words | # Vocabulary |
| :-----------: | :--------: | :-----: | :--------------: |
| `wmt20_km_en` | 928 | 17170 | 3645 |
| `wmt20_cs_en` | 266 | 12568 | 3502 |
| `wmt20_en_de` | 567 | 21336 | 5945 |
| `wmt20_ja_en` | 397 | 10526 | 3063 |
| `wmt20_ps_en` | 1088 | 20296 | 4303 |
| `wmt20_en_zh` | 567 | 18224 | 5019 |
| `wmt20_en_ta` | 400 | 7809 | 4028 |
| `wmt20_de_en` | 314 | 16083 | 4046 |
| `wmt20_zh_en` | 800 | 35132 | 6457 |
| `wmt20_en_ja` | 400 | 12718 | 2969 |
| `wmt20_en_cs` | 567 | 16579 | 6391 |
| `wmt20_en_pl` | 400 | 8423 | 3834 |
| `wmt20_en_ru` | 801 | 17446 | 6877 |
| `wmt20_pl_en` | 400 | 7394 | 2399 |
| `wmt20_iu_en` | 1188 | 23494 | 3876 |
| `wmt20_ru_en` | 396 | 6966 | 2330 |
| `wmt20_ta_en` | 399 | 7427 | 2148 |
| `wmt19_zh_en` | 800 | 36739 | 6168 |
| `wmt19_en_cs` | 799 | 15433 | 6111 |
| `wmt19_de_en` | 800 | 15219 | 4222 |
| `wmt19_en_gu` | 399 | 8494 | 3548 |
| `wmt19_fr_de` | 680 | 12616 | 3698 |
| `wmt19_en_zh` | 799 | 20230 | 5547 |
| `wmt19_fi_en` | 798 | 13759 | 3555 |
| `wmt19_en_fi` | 799 | 13303 | 6149 |
| `wmt19_kk_en` | 400 | 9283 | 2584 |
| `wmt19_de_cs` | 799 | 15080 | 6166 |
| `wmt19_lt_en` | 400 | 10474 | 2874 |
| `wmt19_en_lt` | 399 | 7251 | 3364 |
| `wmt19_ru_en` | 800 | 14693 | 3817 |
| `wmt19_en_kk` | 399 | 6411 | 3252 |
| `wmt19_en_ru` | 799 | 16393 | 6125 |
| `wmt19_gu_en` | 406 | 8061 | 2434 |
| `wmt19_de_fr` | 680 | 16181 | 3517 |
| `wmt19_en_de` | 799 | 18946 | 5340 |
| `wmt18_en_cs` | 1193 | 19552 | 7926 |
| `wmt18_cs_en` | 1193 | 23439 | 5453 |
| `wmt18_en_fi` | 1200 | 16239 | 7696 |
| `wmt18_en_tr` | 1200 | 19621 | 8613 |
| `wmt18_en_et` | 800 | 13034 | 6001 |
| `wmt18_ru_en` | 1200 | 26747 | 6045 |
| `wmt18_et_en` | 800 | 20045 | 5045 |
| `wmt18_tr_en` | 1200 | 25689 | 5955 |
| `wmt18_fi_en` | 1200 | 24912 | 5834 |
| `wmt18_zh_en` | 1592 | 42983 | 7985 |
| `wmt18_en_zh` | 1592 | 34796 | 8579 |
| `wmt18_en_ru` | 1200 | 22830 | 8679 |
| `wmt18_de_en` | 1199 | 28275 | 6487 |
| `wmt18_en_de` | 1199 | 25473 | 7130 |
| `wmt17_en_lv` | 800 | 14453 | 6161 |
| `wmt17_zh_en` | 800 | 20590 | 5149 |
| `wmt17_en_tr` | 1203 | 17612 | 7714 |
| `wmt17_lv_en` | 800 | 18653 | 4747 |
| `wmt17_en_de` | 1202 | 22055 | 6463 |
| `wmt17_ru_en` | 1200 | 24807 | 5790 |
| `wmt17_en_fi` | 1201 | 17284 | 7763 |
| `wmt17_tr_en` | 1203 | 23037 | 5387 |
| `wmt17_en_zh` | 800 | 18001 | 5629 |
| `wmt17_en_ru` | 1200 | 22251 | 8761 |
| `wmt17_fi_en` | 1201 | 23791 | 5300 |
| `wmt17_en_cs` | 1202 | 21278 | 8256 |
| `wmt17_de_en` | 1202 | 23838 | 5487 |
| `wmt17_cs_en` | 1202 | 22707 | 5310 |
| `wmt16_tr_en` | 1200 | 19225 | 4823 |
| `wmt16_ru_en` | 1199 | 23010 | 5442 |
| `wmt16_ro_en` | 800 | 16200 | 3968 |
| `wmt16_de_en` | 1200 | 22612 | 5511 |
| `wmt16_en_ru` | 1199 | 20233 | 7872 |
| `wmt16_fi_en` | 1200 | 20744 | 5176 |
| `wmt16_cs_en` | 1200 | 23235 | 5324 |
### Dataset Creation
The dataset was created by retaining a subset of the top 40% instances from various WMT test sets for which the variance between automatic scores (BLEU, BLEURT, COMET, BERTScore) was the highest. Please refer to the original article [Variance-Aware Machine Translation Test Sets](https://openreview.net/forum?id=hhKA5k0oVy5) for additional information on dataset creation.
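The filtering idea above can be sketched in plain Python. This is an illustrative simplification, not the paper's exact recipe: the scores are made up, and population variance over the raw metric values stands in for whatever disagreement measure the authors actually use.

```python
import statistics

def variance_aware_filter(rows, keep_ratio=0.4):
    """Keep the instances whose automatic metric scores disagree the most.

    Each row is (sentence_id, [scores from BLEU, BLEURT, COMET, BERTScore]).
    Illustrative sketch only; see the paper for the exact procedure.
    """
    ranked = sorted(rows, key=lambda r: statistics.pvariance(r[1]), reverse=True)
    return ranked[: int(len(ranked) * keep_ratio)]

rows = [
    ("a", [0.9, 0.9, 0.9, 0.9]),  # metrics agree: low variance
    ("b", [0.2, 0.8, 0.5, 0.9]),  # metrics disagree: high variance
    ("c", [0.4, 0.5, 0.4, 0.5]),
    ("d", [0.1, 0.9, 0.3, 0.7]),
    ("e", [0.6, 0.6, 0.7, 0.6]),
]
kept = variance_aware_filter(rows)  # top 40% -> rows "d" and "b"
```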
## Additional Information
### Dataset Curators
The original authors of VAT are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
The variance-aware test sets were created from the original WMT test sets. Thus, the [original data licensing plan](http://www.statmt.org/wmt20/translation-task.html) stated by the WMT organizers still applies:
> The data released for the WMT news translation task can be freely used for research purposes, we just ask that you cite the WMT shared task overview paper, and respect any additional citation requirements on the individual data sets. For other uses of the data, you should consult with original owners of the data sets.
### Citation Information
Please cite the authors if you use these corpora in your work. It is also advised to cite the original WMT shared task paper for the specific test sets that were used.
```bibtex
@inproceedings{
zhan2021varianceaware,
title={Variance-Aware Machine Translation Test Sets},
author={Runzhe Zhan and Xuebo Liu and Derek F. Wong and Lidia S. Chao},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems, Datasets and Benchmarks Track},
year={2021},
url={https://openreview.net/forum?id=hhKA5k0oVy5}
}
``` |
hf-test/sv_corpora_parliament_processed | 2022-01-10T10:17:51.000Z | [
"region:us"
] | hf-test | null | null | null | 1 | 13 | Swedish text corpus created by extracting the `"text"` from `dataset = load_dataset("europarl_bilingual", lang1="en", lang2="sv", split="train")` and processing it with:
```python
import re

# `chars_to_ignore_regex` was not defined in the original snippet; a typical
# choice for speech-corpus cleaning is to strip punctuation, e.g.:
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'

def extract_text(batch):
    text = batch["translation"]["sv"]
    batch["text"] = re.sub(chars_to_ignore_regex, "", text.lower())
    return batch
``` |
huggingartists/ed-sheeran | 2022-10-25T09:28:28.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | null | 0 | 13 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/ed-sheeran"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 3.432643 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b501daeff73d1b17610f47a5668f690a.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/ed-sheeran">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ed Sheeran</div>
<a href="https://genius.com/artists/ed-sheeran">
<div style="text-align: center; font-size: 14px;">@ed-sheeran</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/ed-sheeran).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/ed-sheeran")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|923| -| -|
The 'train' split can easily be divided into 'train', 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/ed-sheeran")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
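The boundary arithmetic in the snippet above can be checked with plain Python, using the same percentages and the 923 training texts reported in the split table:

```python
def split_indices(n, train=0.9, validation=0.07):
    # Boundary indices handed to np.split for a train/validation/test split.
    return int(n * train), int(n * (train + validation))

texts = [f"lyric {i}" for i in range(923)]
a, b = split_indices(len(texts))
train_set, val_set, test_set = texts[:a], texts[a:b], texts[b:]
# -> 830 / 65 / 28 examples
```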
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author = {Aleksey Korshuk},
    year = {2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
shibing624/source_code | 2022-10-30T06:30:07.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100M<n<200M",
"source_datasets:https://github.com/shibing624/code-autocomplete",
"source_datasets:https://github.com/... | shibing624 | 纯文本数据,内容:高质量编程源代码,包括Python,Java,CPP源代码 | null | null | 4 | 13 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
- gfdl
multilinguality:
- monolingual
size_categories:
- 100M<n<200M
source_datasets:
- https://github.com/shibing624/code-autocomplete
- https://github.com/bharathgs/Awesome-pytorch-list
- https://github.com/akullpp/awesome-java
- https://github.com/fffaraz/awesome-cpp
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for "SourceCode"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
- **Leaderboard:** [leaderboard](https://github.com/shibing624/code-autocomplete) (located on the homepage)
- **Size of downloaded dataset files:** 105 MB
- **Total amount of disk used:** 570 MB
### Dataset Summary
The source code dataset is a collection of GitHub awesome repos; it contains Python, Java, C++, and other programming languages.
This dataset can be used for different NLP tasks such as language modeling and text generation.
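For language modeling, the raw source lines are typically concatenated and chunked into fixed-size blocks before training. A minimal sketch, using naive whitespace tokenization and a hypothetical block size purely for illustration:

```python
def group_texts(lines, block_size=4):
    """Concatenate lines and split them into fixed-size token blocks, as is
    commonly done when preparing a corpus for causal language modeling.
    Whitespace tokenization is used here purely for illustration."""
    tokens = " ".join(lines).split()
    usable = (len(tokens) // block_size) * block_size  # drop the ragged tail
    return [tokens[i : i + block_size] for i in range(0, usable, block_size)]

lines = ["def add(a, b):", "    return a + b", "print(add(1, 2))"]
blocks = group_texts(lines)  # 9 tokens -> two blocks of 4, tail dropped
```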
data source:
- PYTHON_CODE: https://github.com/bharathgs/Awesome-pytorch-list
- JAVA_CODE: https://github.com/akullpp/awesome-java
- CPP_CODE: https://github.com/fffaraz/awesome-cpp
### Supported Tasks and Leaderboards
- language modeling
- code generation tasks, **Leaderboard:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
### Languages
- programming languages: Python, Java, C++
- natural language: English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": """
import json
import argparse
def _parse_args():
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawTextHelpFormatter,
)
parser.add_argument(
'--model-file',
required=True,
help=(
'A pt file from '
'https://github.com/pytorch/fairseq/tree/main/examples/hubert'
)
)
return parser.parse_args()
"""
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
#### python
```shell
$ wc -l python/*
10000 python/test.txt
5215412 python/train.txt
10000 python/valid.txt
5235412 total
```
#### java
```shell
$ wc -l java/*
950083 java/test.txt
2802880 java/train.txt
940803 java/valid.txt
4693766 total
```
#### cpp
```shell
$ wc -l cpp/*
1060014 cpp/test.txt
3119241 cpp/train.txt
1099124 cpp/valid.txt
5278379 total
```
## Dataset Creation
### Curation Rationale
As a code generation dataset, I uploaded it to Hugging Face Datasets.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Citation:
APA:
```latex
Xu, M. code-autocomplete: Code AutoComplete with GPT2 model (Version 0.0.4) [Computer software]. https://github.com/shibing624/code-autocomplete
```
BibTeX:
```latex
@software{Xu_code-autocomplete_Code_AutoComplete,
author = {Xu, Ming},
title = {code-autocomplete: Code AutoComplete with GPT2 model},
url = {https://github.com/shibing624/code-autocomplete},
version = {0.0.4}
}
```
### Annotations
#### Annotation process
#### Who are the annotators?
nobody
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating code generation models.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Github awesome programing code repos.
### Licensing Information
GNU Free Documentation License v1.3 or later.
For research use only.
### Contributions
Thanks to [@shibing624](https://github.com/shibing624) add this dataset.
|
wanyu/IteraTeR_human_sent | 2022-10-24T18:58:22.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | wanyu | null | null | null | 0 | 13 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_human_sent
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
|
huggan/AFHQv2 | 2022-03-25T07:35:41.000Z | [
"region:us"
] | huggan | null | null | null | 0 | 13 | Entry not found |
huggan/few-shot-obama | 2022-04-12T14:05:43.000Z | [
"arxiv:2101.04775",
"region:us"
] | huggan | null | null | null | 0 | 13 | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
strombergnlp/itu_faroese_danish | 2022-07-01T15:43:48.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"language:fo",
"license:cc-by-4.0",
"arxiv:2206.08727",
"doi:10.57967/hf/0515",
"region:us"
... | strombergnlp | \ | \ | null | 3 | 13 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
- fo
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: ITU Faroese Danish parallel text
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [https://arxiv.org/abs/2206.08727](https://arxiv.org/abs/2206.08727)
- **Leaderboard:**
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is a native-speaker-generated parallel corpus of Faroese and Danish.
### Supported Tasks and Leaderboards
*
### Languages
* Danish
* Faroese
## Dataset Structure
### Data Instances
3995 parallel sentences
### Data Fields
* `id`: the sentence pair ID, `string`
* `origin`: the original sentence identifier text, `string`
* `fo`: the Faroese text, `string`
* `da`: the Danish text, `string`
### Data Splits
Monolithic
## Dataset Creation
### Curation Rationale
To gather a broad range of topics about the Faroes and the rest of the world, in order to enable a general-purpose Faroese-Danish translation system.
### Source Data
#### Initial Data Collection and Normalization
* EUROparl Danish
* Dimmaletting, Faroese newspaper
* Tatoeba Danish / Faroese
#### Who are the source language producers?
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
Two Faroese native speakers, one male and one female, in their 20s, with master's degrees, living in Denmark
### Personal and Sensitive Information
None due to the sources used
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This collection of Faroese is curated by Leon Derczynski
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
```
``` |
bigscience-data/roots_id_indo4b_jw300 | 2022-12-12T11:05:13.000Z | [
"language:id",
"license:cc-by-nc-sa-4.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 13 | ---
language: id
license: cc-by-nc-sa-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_indo4b_jw300
# indo4b_jw300
- Dataset uid: `indo4b_jw300`
### Description
Indo4B consists of around 4B words, in around 250M sentences. The dataset covers both formal and colloquial Indonesian, compiled from 12 corpora: two cover colloquial Indonesian, eight cover formal Indonesian, and the rest have a mixed style, both colloquial and formal.
### Homepage
https://www.indobenchmark.com/
### Licensing
MIT License
### Speaker Locations
### Sizes
- 0.0045 % of total
- 1.7307 % of id
### BigScience processing steps
#### Filters applied to: id
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
linxinyuan/mind | 2022-06-07T23:12:22.000Z | [
"region:us"
] | linxinyuan | null | null | null | 1 | 13 | Entry not found |
Brendan/yahoo_answers | 2022-06-09T03:57:41.000Z | [
"region:us"
] | Brendan | null | null | null | 1 | 13 | Entry not found |
knkarthick/AMI | 2022-10-24T09:16:01.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10<n<1000",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | knkarthick | null | null | null | 3 | 13 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10<n<1000
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: AMI Corpus
---
# Dataset Card for AMI Corpus
## Dataset Description
### Links
- **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/
- **Repository:** https://groups.inf.ed.ac.uk/ami/download/
- **Paper:** https://groups.inf.ed.ac.uk/ami/corpus/overview.shtml
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.
#### Synchronised recording devices:
close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, individual pens.
#### Annotation:
orthographic transcription, annotations for many different phenomena (dialog acts, head movement, etc.).
Although the AMI Meeting Corpus was created for the uses of a consortium that is developing meeting browsing technology, it is designed to be useful for a wide range of research areas. The downloads on this website include videos that are suitable for most purposes, but higher resolution videos are available for researchers engaged in video processing.
All of the signals and transcription, and some of the annotations, have been released publicly under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).
### Languages
English
## Dataset Structure
### Data Instances
AMI Corpus is a meeting summarization dataset, consisting of 279 dialogues split into train, test and validation.
The first instance in the training set:
{'id': '30', 'summary': "The project manager opens the meeting by stating that they will address functional design and then going over the agenda. The industrial designer gives his presentation, explaining how remote controls function and giving personal preference to a clear, simple design that upgrades the technology as well as incorporates the latest features in chip design. The interface specialist gives her presentation next, addressing the main purpose of a remote control. She pinpoints the main functions of on/off, channel-switching, numbers for choosing particular channels, and volume; and also suggests adding a menu button to change settings such as brightness on the screen. She gives preference to a remote that is small, easy to use, and follows some conventions. The group briefly discusses the possibility of using an LCD screen if cost allows it, since it is fancy and fashionable. The marketing expert presents, giving statistical information from a survey of 100 subjects. She prefers a remote that is sleek, stylish, sophisticated, cool, beautiful, functional, solar-powered, has long battery life, and has a locator. They discuss the target group, deciding it should be 15-35 year olds. After they talk about features they might include, the project manager closes the meeting by allocating tasks.", 'dialogue': "Speaker A: Cool. Do you wanna give me the little cable thing? Yeah. Cool. Ah, that's why it won't meet. Okay, cool. Yep, cool. Okay, functional requirements. Alright, yeah. It's working. Cool, okay. So what I have, wh where I've got my information from is a survey where the usability lab um observed remote control use with um a hundred subjects and then they gave them a questionnaire. Um so it was all about, you know, how people feel about the look and feel of the remote control, you know. What's the most annoying things about remote controls and um the possibility of speech recognition and L_C_D_ screens in remote control. 
Not that they actually gave me any answers on the L_C_D_ screens, so I should have taken that bit out, but anyway. Um okay, so. What they found is that people don't like how current remote controls are, so you know, definitely you should be looking at something quite different. Um seventy five percent of users find most remote controls ugly. Uh the other twenty five percent have no fashion sense. Uh eighty percent of users would spend more to get um you know, a nice looking remote control. Um current remote controls, they don't match the user behaviour well, as you'll see on the next slide. Um I dunno what zapping is, but Oh, right. But you have that little thing that comes up at the bottom and tells you what's on. Um okay, fifty percent of users say they only use ten percent of the buttons, so that's going back to what, you know, we were saying earlier about, you know, do you need all the buttons on the remote control, they just make it look ugly. Okay? Cool. Um so this is my little graph thing. Mm k Okay, well, I can send it to all of you. What it is is um it's cones, 'cause I thought they'd be more exciting. Um but ooh where's it go? Back. Oh. Oh yes, cool. Okay, I'm gonna stop playing with the little pointy thing. Um okay, so like what it shows is how much things are used relatively and what you can clearly see from that is the thing that's used most is the channel selection. What you can't see is volume selection, it's a little bit higher than all the others. 
Yeah, so what the graph shows is that, you know, power, channel selection and volume selection are important, and the rest of them, you know, nobody really uses and so that's the the numbers along the top represent their like um their importance, you know, so on a scale of one to ten, how important is that and, you know, channel selection and volume selection are absolutely essential, and the power, well it's not quite so essential, apparently, although I don't understand how it couldn't be, um and everything else, I think, you know, you can forget about having those buttons on the remote control, 'cause they're just not needed, and they're not used. Okay. This is the bit that the email messed up for me and that's what I was fiddling about with at the beginning of the thing. Okay, cool. So um okay, so this is what people find annoying about remote controls. Uh that they get lost, that the uh you know, they're not intuitive and that they're bad for repetitive strain injury. I think if you're watching enough T_V_ to get repetitive strain injury from um you know, watching T_V_, then that's the least of your problems, but you know, it's up there. Um that yeah. Okay, so um I mean the the R_S_I_ thing would be that, like when you have the computer keyboards and you keep your wrists up would be something that encourages you want something with an ergonomic t design that encourages good use of the remote control and you know, not straining your wrists watching T_V_. Yes. Okay, cool. Right, um sorry this is pink because I was copying and pasting the table, and I didn't have time to white it out again. Um okay, but that shows how people whether they would pay more for voice recognition software. 
So you can see from that that, you know, younger people to the age of thirty five are quite likely to pay quite a lot more f well quite are quite likely to pay more for voice recognition software, whereas as people get older, they're a bit more sceptical about it and they're less willing to to try it. Um so clearly voice recognition is something to think about, but um you know I d I do wonder how well that would work given that a T_V_, you know, tends to be people talking and um, you know, how are you going to stop it from just flipping channels whilst watching T_V_. Um okay? Cool. Um okay, so these are my personal preferences. So you have sleek, stylish, sophisticated, you know, so something that's, you know, a bit cool. Um you know, functional, so it's useful, but minimalist. Um there's a there's an important thing that, you know, people use when, you know, when you're filling up your home, you know, a lot of people fill up their home with bits of crap, basically, you know, and you've got all this stuff, and you're just like, what the hell is that, who is ever gonna use it? You know, so things should either be functional or beautiful or preferably both, so I think we need to aim for both. Um okay, then a long battery life, like you were talking about earlier and um, you know, I was thinking that solar power would be quite cool because, you know, your remote control just sits there, and you could just sit it in the sunshine and save the environment a bit. Um and then like a locator, so you know, kind of like you have for a mobile phone or not a mobile phone Yeah, that's it, you know. I know, it's weird. My flatmate and I were talking about this on the way into uni this morning and I was like I need to get one for everything. 
So yeah, so maybe something where you clap and then it beeps, something a kind of sound that you don't often hear on the T_V_, you know, 'cause you don't want your remote control beeping every five minutes, 'cause you you'd then deliberately lose it by throwing it out the window or something. So okay? Cool. That's me. Cat's. Ca. Yeah, I mean that's the thing is that it didn't say in the survey, you know, whether, you know, these are the people that will pay more for a more stylish remote control, but I'm assuming, you know, yes. Well, that's when you go to uni, isn't it? So, you know Yeah. Oh, I've unplugged it. Do you want me to Yeah. Seventy six point three percent. Yeah. Yeah, I kn I mean I know what you're saying about the fifteen to twenty five year olds, but I mean it has been proven that that people of that age group have a higher disposable income because they don't have like I mean, you know, if you're at university, you're paying your rent, but you don't have a mortgage, you don't have a life insurance policy, you don't normally have a car, yeah, so. You're still learning to drive actually, so that just costs more than a car, but yeah. Um so I mean like it is an age group to target, really, I think. No, I mean that's what, that's like fifteen Pounds? You know, I think Yeah, I d I don't know many people without a T_V_. We didn't have a T_V_ last year, and everyone thought we were off our heads, you know. Yeah, I d well we've we've got quite a d decent T_V_. Yeah. I think I think the fact that, you know, ninety one point two percent of fifteen to twenty five year olds are saying yes, I would pay more for a voice recognition remote control, does say quite a lot really. You know, so I mean that and the disposable income and I don't think it's something to ignore, you know. Is not a massive difference, you know. No, do totally. You do have it in your mobile phone though, don't you? 
Because you have like I mean every mobile phone now has like call this person and it calls them. I don't know. Yeah. S so y you'd maybe need a code word. Do you know what I mean? So like when you say change, except that's being said quite a lot on T_V_, so maybe like, you know, remote. I mean how often do people say remote on T_V_? Although I only watch Charmed, so really I wouldn't know but like so you'd just say remote five, you know, remote ten, remote one two nine. I don't think there's a lot of uh voice recognition remote controls. Yeah, that would be another way to do it. Yeah, but then the code word would be even more important, because I mean Sky advertise on every channel, don't they, you know, so then it would be you'd be watching Charmed, and then the Sky advert would come on and it would change to Sky. Yeah, yeah, and that would be really annoying. Yeah. Do you not think that defeats the object of having voice recognition on a remote control though? Yeah, you know, so you have to have the remote control. It's more like if you lost it and it's down the sofa sometime, you can yell at it and it'll just change it, you can look for it later, yeah. Yeah, yeah, I suppose nearer to you but a b like if you have surround sound then Yeah. Yeah, 'cause it's it's quite important that you don't lose the the bit to locate the remote control. Yeah, definitely, yeah. Oh, so y you want our um PowerPoint presentations in there, hey? Okay. There you go. But is everyone's called functional requirements? Okay, so that's good. That's me done. Okay, cool.\r\nSpeaker B: No. Mm. Um um wi on on a what? Oh project project documents, yeah, yeah, yeah, okay. Oh okay, yeah. Yes, I think so. Yeah, the last minute, yeah, yeah. Yeah. Um Okay. Hmm. Mm. Okay, yeah, afterwards, yeah, okay. Thanks. I think we need like some general discussion at the end probably. Yeah. Yeah, I think since since we were discussing some um design issues then I I I would like to continue okay, yeah. Thanks. 
Oh i Okay, I hope wait. Should it just There's just nothing. Oh right, right, right, um Okay. Nothin okay, something is coming up. No signal? Why? Oh. My my computer went blank now. Adjusting. But I don't see anything I don't see anything on my computer now. This is the problem, but Um. Uh now it's okay. No? No. Oh okay. Okay, that's fine, that's good. Okay, let's start from the beginning. So I'm going to speak about technical functions design uh just like some some first issues that came up. Um 'kay, so the method I was um adopting at this point, it's not um for the for the whole um period of the um all the project but it's just at th at this very moment. Um uh my method was um to look at um other um remote controls, uh so mostly just by searching on the web and to see what um functionality they used. And then um after having got this inspiration and having compared what I found on the web um just to think about what the de what the user really needs and what um what the user might desire as additional uh functionalities. And yeah, and then just to um put the main function of the remote control in in words. Um so the findings uh were um that the main function of the remote control is is just sending messages to the television set, so this quite straightforward. And uh w some of the main functions would be switching on, switching off, uh then the user would like to switch the channel um for example just m changing to the next channel to to flip through all all of the possible channels, or then mm uh the other possibility would be that um she might just want to choose one particular channel, so we would need the numbers. And and also the volume is very important. Um um I als okay. 'Kay. 
Um um among the findings I found that m m most of the curr mm presently available remote controls also include other mm functionalities um in their design, like operating a V_C_R_, but they don't seem to be able to deal with D_V_D_ players, but then there are surely there are many other functionali functions that could possibly be added to them, but according to the last minute update um actually um we do not want to have all this complicated functions added to our design. So my personal preferences would be uh to keep the mm the whole remote control small um just like the physical size. And then it must be easy to use, so it must follow some conventions um like whereabouts you find the on off button and maybe the colour tends to be red or something. Um then yeah, the must-have buttons would be on off and then the channel numbers and then um the one that allows us to go to the next or the previous channel, and then volume has to be there. But then um other functionalities um could be just uh there could be a menu button and you could change things on the screen then, um for example brightness and mm similar functions could be just um done through the menu. And yeah, the last question I had about whether we wanted to incorporate n uh more functionalities, the answer was already no because of the last minute update. So at the for the time being that's uh that's all. If you have questions Yeah, and also it's it's um other question is uh because there are so many different And there are so many different things that could possibly be included because besides video and D_V_D_ there are the mm um video C_D_s and whatever, so it might be problematic to to choose between all these possible things. Um well, I think the buttons are still mm kind of the most um easy for the user to use, I mean um what other options would you have? 
A little screen or something, but this would be really kind of I think a lot of learning for the user and and I mean the user just wants to get um get a result um quickly, not to spend time in like um giving several orders um I dunno. I think I th I would I would think the put the buttons, but if if you have other mm proposals um. Yeah. Yeah. Mm-hmm. Yep. Uh am I going in the right direction? No. Wait. Okay, here it comes. Okay, here you are. Um that's very good, very interesting. Mm-hmm. Yeah. Yeah, you share a television or something that yeah. It was seventy something, yeah, yeah. Yeah this this is not unaffordable, but the problem is whether people need it, whether they do have a T_V_ to use its full Yeah. Common, the students yeah, yeah. The s the stu yeah, and the remote control might not yeah, it might not even function with the old T_V_. Yeah, we're still yeah. Or w maybe we can just kind of uh uh Yeah, but at the same time I think maybe we can we can just decide to to have both of these groups as our target, because actually I mean they're all still re young people. Yeah. Yeah. Yeah. Yeah. An Yeah. Yeah. Yeah but uh um Yeah, yeah sure, yeah, yeah. Yeah. Yeah, w well now the v the voice recognition if if it works wonderfully w we could possibly do away with all buttons, but I think this is not really the right moment yet, because people are just so used to buttons and um, yeah it's it's kind of safer, so we we need both, so the voice recognition would be just an extra, it wouldn't really reduce the size of the remote. Yeah but m but on the other hand, remote control isn't as close to you you probably might just just uh speak into it and and the T_V_ would be already further away, so it might not pick up the other things coming from there. 
Yeah, but then the remote control I think I mean um the idea is kind of it's it's not that it's sitting there on on top of the television, because then you could already yell at the television and you wouldn't you you wouldn't need the remote control, so the remote control is still something you keep n near yourself. Yeah, yeah, yeah. No, but I I I was just defending the the fact why why we want to keep the remote control close to us, a and uh not to yell at it from the distance. Okay. Oh yeah, yeah. Okay, yeah, mm-hmm. The major ones, yeah. Mm-hmm. Mm-hmm. Yeah. Did you find it? It's just yeah, yeah. Oh so so we'll just put them i there, we we yeah, w we won't even okay. Yeah. Yeah. Uh something conceptual, yeah. Hmm. Sorry, but um the next meeting um are we going to have it um right after lunch or shall we prepare our To prepare, okay, yeah, that's good. Okay. Cool. Okay, see you.\r\nSpeaker C: Mm. You said uh targ target groups, what does that mean? Uh okay, 'kay. So are Okay. Alright. I can go first, yeah. Right. Um so f from the Right sure. Uh okay. So n uh with uh with regard to the uh working design of this uh uh remote control uh I've identified um a few basic uh components of the remote and uh se uh from the design, functional design perspective um w I c we can now uh know wha what exactly the components are and how how they work together with each other. So this is the method that uh I'll mostly be following in my um in my uh role. Um the identification of the components, uh and uh since since I'm dealing only with the technical aspects, I would need feedback from the marketing person uh and uh from the user interface person. Uh we'll then integrate this into the product design at a technical level and uh basically update and come up with a new design, so it's a cyclical process. Okay, so these were the basic findings from today. The last three bullets have been integrated from uh the last minute uh email. Uh I just quickly jotted them down. 
Um so basically uh the as I told you the identification of how the remote control works and what are the various parts to it uh and what are the different processes um and how the parts uh communicate with each other. Um okay, so e the mee email said that teletext is now outdated, so we need to do away with that functionality of the remote control. Um also uh the remote control should be used only for television, because incorporating other features um makes it more comp complex. And the reason why teletext is outdated because uh of internet and uh the availability of internet over television. How however, our our remote control would only be dealing uh with the the use for television, in order to keep things simple. Um also the management wants that um our design should be unique uh it so it should incorporate um colour and the slogan uh that our company um has it as its standard. Okay, so he he here is a functional overview of the remote control. Um there's basically an energy source at the heart uh which feeds into the chip and the user interface. The user interf interface communicates with the chip, so I'll basic go over to the Okay. So if uh if this is our energy source and this is a cell, uh it communicates uh it feeds energy into the into the chip, which basically finds out h uh how how to do everything. There is a user interface here. So whe when the user presses a button, it feeds into the chip and the chip then generates a response and takes the response to an infrared terminal, um which then so the output of the chip is an infrared bit code, which is then communicated to the remote site, which h has an infrared receiver. Um the there can be uh a bulb here or something to indicate whether the remote is on or communicating. Um so these are the essent so a all the functionality of the remote control, whatever new functions that we need to do, um make the chip more complicated uh and bigger, basically. Okay. 
Um so i in my personal preferences um I'm hoping that we can ke keep the design as simple and clear as possible. This would uh help us uh to upgrade our technology at a future point of time. And uh also if we can incorporate uh the latest features in our chip design, so that our um uh remote control does not become outdated soon and it's compatible with mot most uh televisions. That's about it. So anything that you would like to know or No, I don't have any idea about what each component costs. Um yeah. Anything else? Yeah. Certainly, yeah. So so tha yeah, we definitely need to operate within our constraints, but um unfortunately I I do not have any data, so uh I just identified the functional components for that. Yeah, okay. Yeah. Mm 'kay. I it'll take some time. Oh, there it is, yeah. It'll come up, it um uh no signal. Yeah yeah, it says something now, adjusting Okay. Oh, that's strange. Okay. And one more time. Mm. Sorry, cou could you go back for a second? Uh switching on off channel, uh volume, okay, that's great. So in the u user interface requirements uh uh uh we we have been able to identify what are the basic buttons that we do want. Um but um so so at this stage, uh how we go about implementing those button we will not identify or I mean in we can completely do away with buttons and uh have some kind of a fancy user interface or something like that. But uh is is there any uh uh any thoughts on that? Right. Yeah, and it'll make the costs yeah. Right. Uh I think the co costs will also play a big role when we come to know about them. So well we can probably wait until t we have more knowledge on that. Uh i if the if the costs allow, we can have like an L_C_D_ display and uh with um because we do want something fancy and fashionable as well. So yeah? Cool. try to press oh, okay, yep. Mm. Right. Mm-hmm. Mm. Right. Mm-hmm. Hmm. Right. Mm. Mm. Mm. Some kind of a ring, some Right. Hmm. Okay, that's great, thanks. Mm. 
I think one of the very interesting things that came up in um uh Ka Kate Cat Cat's uh presentation was um uh this this issue of uh uh like voice recognition being more popular with uh younger people. So if we need to have a target group um then uh I think as far as the m motto of our company is concerned, if we want to have something sleek and uh you know, good looking uh we are better off targeting a younger audience then um you know, people who are comparatively elderly. Um. Right. Right. Bu but but the survey did say that f things like voice recognition are more popular with them, so if you want to put in something stylish, then uh th it'll certainly be more popular with this i ye with the younger people as compared to older people, yeah. Right, and Right. Mm. Right. But uh still, if if you can go back to that slide and uh, how popular was it? Oh, oh, okay. That's alright, if you can just look it up on your computer, wh uh um people between twenty five to thirty five, uh how popular was so it was sti still still quite popular amongst them. So even they are seventy six percent, is that high amount? Alright. Yeah. So you're more likely to b Yeah. Yeah. Mm. Bu but even even in the case of twenty five to thirty five it's quite popular, right? So mm uh are are are Mm. Mm. Um I was having a a general outlook on um m most like sophisticated features, but voice recognition itself I'm not very sure about, because one of the p uh things that Cat pointed out was uh uh how do we go about implementing it? Uh and uh Yeah. But how frequently do we use it anyway and um uh h ho how good is it, you know uh voice recognition softwares are still quite uh Yeah. Right. Right. Okay. O Right. Mm. Right. Yeah. Okay, so it seems like a feasible thing to implement uh for for a limited yeah. Mm. W What uh Mm. What wh uh what I was thinking is that there is this uh separation between what the channels are on T_V_ and how they are numbered on the remote control. 
If we can do with away with that, our product can be really popular uh in the sense that uh a person can say, I want to watch uh I_T_V_ one instead of saying that I want to go onto channel number forty five. Yeah, so if uh if something like that can be incorporated, some kind of Mm-hmm. Alright. Yeah, that's Right. Mm. Mm yeah and it might become very difficult from a distance for the television to understand what you're saying because of the noise factor for the remote control being cl I mean it'll it'll mm. Yeah. Mm. So uh wh another thing uh that can be used is that uh there can be a beeper button on the T_V_, so you can go and press that button and um and the remote control, wherever it is, it'll beep, so we we can probably come to know where it is. Right, yeah, yeah, yeah. Alright, yeah. Right. Okay. So where exactly is this i Ah, okay. Yeah. Yeah, yeah in that one, right yeah. No. Right. I guess I'll find out. Wha what was it again that I was supposed to look into? Con components, oh.\r\nSpeaker D: All hooked up. Okay, so now we are here at the functional design meeting. Um hopefully this meeting I'll be doing a little bit less talking than I did last time 'cause this is when you get to show us what you've been doing individually. The agenda for the meeting, I put it in the sh shared documents folder. I don't know if that meant that you could see it or not. Did anyone? No. Oh well. Um I'll try and do that for the next meeting as well so if you check in there, there's a shared project documents folder. Um and it should be in there. Project documents, yeah. So I'll put it in there. Is it best if I send you an email maybe, to let you know it's there? Yep. I'll do that next time. Um I'll act as secretary for this meeting and just take minutes as we go through, and then I'll send them to you after the meeting. The main the main focus of this meeting is your presentations that you've been preparing during the time, so we'll go through each of you one by one. 
Um then we need to briefly discuss the new project requirements that were sent to us. I just sent at the last minute, I'm sorry about that, but we can see how that affects what you were you were doing. Um and then we need to, by the end of the meeting come to some kind of decision on who our target group's going to be and what the functions of the remote control that's the the main goal is to come up with those two things, target group and functions of the remote control. And we've got forty minutes to do that in. So I would say yeah? As uh who it is that we're going to be trying to sell this thing to, yeah. So we need to yeah, we need to have a fairly defined group that that we want to focus on and then look at the functions um of the dem remote control itself. So with that I think it's best if I hand over to you. Does anyone have a preference for going first? You wanna go first? Okay, so we need to unplug my laptop and plug in yours. I assume we just pull it out? Just before you start, to make it easier, would you three mind emailing me your presentations? Once we you don't have to do it now but when once you go back, just so that I don't have to scribble everything down. Hmm. Mm-hmm. Okay. Do you have any um i idea about costs at this point? Br Okay. 'Cause that's something to consider, I guess, if we're if we're using more advanced technology, it might increase the price. Yeah. That's fine. Are there any more questions, or shall we just skip straight to the next one and then we can discuss all of them together at the end? Yeah, I think that will do. Okay, so do you want to Yes, shall shall we pull this up? I think that has to come out of there. Yeah. Yeah, I thought those last minute things, they're gonna hit you the worst. It ta takes a little Oh, and have you you need to then also press on yours, function F_ eight, so the blue function key at the bottom and F_ eight. Now it's coming, computer no signal. Maybe again? Okay, adjusting. There we go, there we go. 
Oh, if you press if you press function and that again there's there's usually three modes, one where it's only here, one where it's only there, and one where it's both. Okay, so one more time. Should yeah just wait for a moment, adjusting. Okay. Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. Yeah. If I mean that was the the directive that came through from management, but if we had a a decent case for that we really think it's important to include video and D_V_D_, I could get back to them and see. It's w it's just whether it's worth arguing about. Mm-hmm. Yeah. Mm-hmm. Okay. Are there any questions for clarification of Maarika before we go on to the next one? Mm-hmm. Mm. Mm. Mm-hmm. Sure, we can discuss that maybe after the next one. Do you want to yeah. Oh, I'm getting hungry. You set? Uh we need to do the function key thing so that it comes up on here. Hello. Is it plugged in prop it's working? Okay. Excellent. It's um switching between channels, sort of randomly going through. Mm. Ooh, that's a bit difficult to see. If you explain it to us it'll be fine. Yeah. I liked the, I liked the litt ooh come back. No. Okay. Mm-hmm, that's the next one along, yeah? Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. The remote control. Mm-hmm. That's alright. Mm. Keys and things like that, yeah. Whistle and it screams at you, yeah. Mm-hmm. That's you, excellent. Um. I'm just gonna tick yes. So, we've got about ten, fifteen minutes to discuss Mm-hmm. Yeah. Mm-hmm. Yeah. Then again I guess the th where it was most popular was the fifteen to twenty five bracket and the I don't know how often they're buying televisions. Yeah, but you don't have much money, generally. I would've thought it's it's more that twenty five to thirty five, when people are really moving out and they've got their first job and they want their nice toys and O oh it's on sorry, we unplugged it. Here, let me Yeah. Mm-hmm. Yeah. Yeah, they've got no commitments and usually not a car and all of those things. Kids. Yeah. 
Yeah, and if we're if we're talking twenty five Euros as a price, that's not unaffordable, even for young people. Yeah. Yeah. But do they But the T_V_s are often kind of someone's old T_V_ that's blah blah and be a bit strange to have a fancy rome remote. Mm. Yeah. Yeah. Yeah. Yeah. Yeah, if we ta if we take fifteen to thirty five, but that then does imply that we should try and incorporate voice recognition. Is that gonna have a an implication for the technical specs? Mm-hmm. Yeah. Yeah. With um but with a T_V_ remote it's gonna be quite limited if we're t saying the main things people want to do is on off channel five, louder, tha that should be relatively simple. Mm. Yeah. Mm-hmm. Yeah, but maybe if you wanna look into that just to just to check. Um, so if we go for the the fifteen to thirty five age group and then of course we're going to get th anyone who's older than thirty five who wants to look young and hip and trendy and has the money, then they'll they'll still go for the same advertising. Yeah, I think we need both. Yeah. Mm. Uh-huh. Uh-huh. So that if that was in the the voice recognition, that would be great. Yeah. Yeah. Watch Sky and yeah. Mm-hmm. But that's definitely a possibility. Yeah. So that you can yell at it, yeah. Yeah. Alright. Mm. Yeah. Yeah. Yeah. Yeah. Mm-hmm. That's but then if you're buying the remote separately, but y you could have something, but i if it was something that you could like stick onto the T_V_ or something, some like a two p if you bought it in a two part pack, so one part attaches to the T_V_. The l Well that's right, but it solves the problem of having different noises. Yeah. Okay, I think we're gonna have to wrap this up um. 
But if we go away with that that kind of general um specification in mind that we're looking at fifteen to thirty five year olds, we want it to look simple, but still have the buttons so it's easy to use, but only those key buttons, the major buttons and then one sort of menu one, and then voice recognition included as an option um but that obviously needs a little bit more working out as to whether it's really feasible and some of those problems we were mentioning um. What we have to do now is to go back to our little places, complete our questionnaire and some sort of summarisation, which y you'll get immediately by email. Send me your presentations so that I can use them to make the minutes, and then we've got a lunch break and after lunch we go back to our own little stations and have thirty minutes more work. Um I'll put the minutes in that project documents folder, but I'll send you an email when I do it, so that you know. It should be on your desktop, so on the yeah. So I'll put it I'll put them there as soon as I've written them. Yeah, and email them round. Yeah, that would be great. Oh yeah, put them in there. Yeah, then you don't have to email them. No, they're all called something slightly different. Technical requirements and something something, yeah. So, if you put them in there, we'll all be able to see them and refer to them if we need to. Um as to where we're going from here, you're going to look at the components concept. Yeah? Whatever that means. Yeah. You'll be looking you'll be looking at the user interface concept, on something conceptual and you're watching trends to see how we go and surely voice recognition'll fall off the map or something that um we'll keep keep our options op hmm? Components, yeah. No, we have we have after lunch we have thirty minutes to ourselves to prepare, so that's fine, w before lunch we just have to complete the questionnaire and some sort of summary. Okay? Right on time. 
Okay, so you can I guess we'll see you for lunch in a sec?"}
### Data Fields
- dialogue: text of the dialogue.
- summary: human-written summary of the dialogue.
- id: unique file id of an example.
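The three fields above can be illustrated with a minimal record sketch. Only the field names come from this card; the values below are invented for illustration:

```python
# Minimal sketch of one record, following the "Data Fields" section above.
# The field names are from the card; the values are invented examples.
example = {
    "id": "ES2002a",  # hypothetical AMI-style meeting id
    "dialogue": "Speaker A: Okay, let's start the meeting. Speaker B: Sure.",
    "summary": "The team opens the meeting and agrees on the agenda.",
}

# Every record carries exactly these three string-valued fields.
assert set(example) == {"id", "dialogue", "summary"}
assert all(isinstance(v, str) for v in example.values())
```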
### Data Splits
- train: 209
- val: 42
- test: 28
## Dataset Creation
### Curation Rationale
See the dataset summary above.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
Licence: CC BY 4.0
## Citation Information
```
Carletta, J. (2006). Announcing the AMI Meeting Corpus. The ELRA Newsletter, 11(1), January-March, pp. 3-5.
```
## Contributions
Thanks to Carletta for adding this dataset. |
its5Q/panorama | 2022-08-05T18:18:10.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ru",
"license:unknown",
"news",
"articles",
"newspapers",
"pano... | its5Q | null | null | null | 5 | 13 | ---
annotations_creators:
- no-annotation
language:
- ru
language_creators:
- other
license:
- unknown
multilinguality:
- monolingual
pretty_name: Dataset of satirical news from "Panorama", Russian "The Onion".
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- news
- articles
- newspapers
- panorama
task_categories:
- text-generation
task_ids:
- language-modeling
---
### Dataset Summary
Dataset of satirical news from "Panorama", Russian "The Onion".
### Dataset Format
The dataset is in JSON Lines format, where "title" is the article title and "body" is the article text. |
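A minimal sketch of reading this JSON Lines layout. The sample line is fabricated; only the `title`/`body` field names come from the description above, and a real file would be opened with something like `open("panorama.jsonl", encoding="utf-8")`:

```python
import io
import json

# One fabricated article line in the JSONL layout described above.
raw = io.StringIO('{"title": "Заголовок статьи", "body": "Текст статьи."}\n')

# Each non-empty line is one JSON object with "title" and "body".
articles = [json.loads(line) for line in raw if line.strip()]
assert articles[0]["title"] == "Заголовок статьи"
```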