id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
mwinn99/biovdb_1000 | 2023-08-28T22:09:14.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1k",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"biology",
"region:us"
] | mwinn99 | null | null | null | 0 | 11 | ---
license: cc-by-4.0
task_categories:
- tabular-classification
pretty_name: Biovdb
size_categories:
- n<1k
- 1K<n<10K
viewer: false
tags:
- biology
---
# Biovdb
Test set of ~1000 samples from GEO.
|
ambushburn/burn-classification | 2023-09-20T16:21:04.000Z | [
"region:us"
] | ambushburn | null | null | null | 0 | 11 | Entry not found |
yurakuratov/example_promoters_300 | 2023-08-29T09:33:54.000Z | [
"region:us"
] | yurakuratov | null | null | null | 0 | 11 | Entry not found |
ChristophSchuhmann/movie-clips | 2023-09-06T09:28:59.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | null | 0 | 11 | Entry not found |
edmundtsou/keywords_daily_dialog | 2023-09-05T00:17:00.000Z | [
"region:us"
] | edmundtsou | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: dialog
sequence: string
- name: ids
dtype: int64
- name: keywords
sequence:
sequence: string
splits:
- name: train
num_bytes: 10163143
num_examples: 13118
download_size: 5240789
dataset_size: 10163143
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "keywords_daily_dialog"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NgThVinh/dsc_model | 2023-09-05T08:27:26.000Z | [
"region:us"
] | NgThVinh | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: document
dtype: string
- name: claim
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 113122532.72787674
num_examples: 132448
- name: test
num_bytes: 28281487.272123266
num_examples: 33113
download_size: 89644483
dataset_size: 141404020.0
---
# Dataset Card for "dsc_model"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
winterForestStump/10-K_sec_filings | 2023-10-03T19:39:24.000Z | [
"region:us"
] | winterForestStump | null | null | null | 1 | 11 | ---
dataset_info:
features:
- name: cik
dtype: int64
- name: company_name
dtype: string
- name: filing_date
dtype: timestamp[ns]
- name: Business
dtype: string
- name: Risk Factors
dtype: string
- name: Unresolved Staff Comments
dtype: string
- name: Properties
dtype: string
- name: Legal Proceedings
dtype: string
- name: Mine Safety Disclosures
dtype: string
- name: Market for Registrant’s Common Equity, Related Stockholder Matters and Issuer
Purchases of Equity Securities
dtype: string
- name: Selected Financial Data
dtype: string
- name: Management’s Discussion and Analysis of Financial Condition and Results
of Operations
dtype: string
- name: Quantitative and Qualitative Disclosures about Market Risk
dtype: string
- name: Financial Statements and Supplementary Data
dtype: string
- name: Changes in and Disagreements with Accountants on Accounting and Financial
Disclosure
dtype: string
- name: Controls and Procedures
dtype: string
- name: Other Information
dtype: string
- name: Directors, Executive Officers and Corporate Governance
dtype: string
- name: Executive Compensation
dtype: string
- name: Security Ownership of Certain Beneficial Owners and Management and Related
Stockholder Matters
dtype: string
- name: Certain Relationships and Related Transactions, and Director Independence
dtype: string
- name: Principal Accountant Fees and Services
dtype: string
- name: Exhibits, Financial Statement Schedules
dtype: string
splits:
- name: '001'
num_bytes: 1305976147
num_examples: 5000
- name: '002'
num_bytes: 1547107096
num_examples: 5000
- name: '003'
num_bytes: 1500950344
num_examples: 5000
- name: '004'
num_bytes: 938669696
num_examples: 3000
- name: '005'
num_bytes: 1161187900
num_examples: 4000
- name: '006'
num_bytes: 937988835
num_examples: 3000
- name: '007'
num_bytes: 694775532
num_examples: 2000
- name: '008'
num_bytes: 866183252
num_examples: 3000
- name: '009'
num_bytes: 705057218
num_examples: 3000
- name: '010'
num_bytes: 705057218
num_examples: 3000
- name: '011'
num_bytes: 885667244
num_examples: 2000
- name: '012'
num_bytes: 329414277
num_examples: 2000
- name: '013'
num_bytes: 739146986
num_examples: 3000
- name: '014'
num_bytes: 458266896
num_examples: 1000
- name: '015'
num_bytes: 710988934
num_examples: 2000
- name: '016'
num_bytes: 250689742
num_examples: 2000
- name: '017'
num_bytes: 474864951
num_examples: 2000
- name: '018'
num_bytes: 615827939
num_examples: 2000
- name: '019'
num_bytes: 357457451
num_examples: 1000
- name: '020'
num_bytes: 584057786
num_examples: 2000
- name: '021'
num_bytes: 141712850
num_examples: 2000
- name: '022'
num_bytes: 503977366
num_examples: 2000
- name: '023'
num_bytes: 468353001
num_examples: 2000
- name: '024'
num_bytes: 450924639
num_examples: 1000
- name: '025'
num_bytes: 504057453
num_examples: 2000
- name: '026'
num_bytes: 169593248
num_examples: 2000
- name: '027'
num_bytes: 464799632
num_examples: 2000
- name: '028'
num_bytes: 297637001
num_examples: 1000
- name: '029'
num_bytes: 368760540
num_examples: 1000
- name: '030'
num_bytes: 319606303
num_examples: 1000
- name: '031'
num_bytes: 394028378
num_examples: 2000
- name: '032'
num_bytes: 343965348
num_examples: 2000
- name: '033'
num_bytes: 522452994
num_examples: 1999
- name: '034'
num_bytes: 509087440
num_examples: 1000
- name: '035'
num_bytes: 509775862
num_examples: 1001
- name: '036'
num_bytes: 437503604
num_examples: 1000
- name: '037'
num_bytes: 610792518
num_examples: 2000
- name: '038'
num_bytes: 581885486
num_examples: 2000
- name: '039'
num_bytes: 350277811
num_examples: 1000
- name: '040'
num_bytes: 627141247
num_examples: 1500
- name: '041'
num_bytes: 305018992
num_examples: 700
- name: '042'
num_bytes: 555710158
num_examples: 600
- name: '043'
num_bytes: 593433327
num_examples: 500
- name: '044'
num_bytes: 352017311
num_examples: 700
- name: '045'
num_bytes: 342614047
num_examples: 1000
- name: '046'
num_bytes: 323563296
num_examples: 1000
- name: '047'
num_bytes: 236981244
num_examples: 1000
- name: '048'
num_bytes: 622649279
num_examples: 1000
- name: '049'
num_bytes: 358151664
num_examples: 1000
- name: '050'
num_bytes: 661144363
num_examples: 1000
- name: '051'
num_bytes: 421673110
num_examples: 400
- name: '052'
num_bytes: 317359748
num_examples: 100
download_size: 13361256647
dataset_size: 29477068619
configs:
- config_name: default
data_files:
- split: '001'
path: data/001-*
- split: '002'
path: data/002-*
- split: '003'
path: data/003-*
- split: '004'
path: data/004-*
- split: '005'
path: data/005-*
- split: '006'
path: data/006-*
- split: '007'
path: data/007-*
- split: '008'
path: data/008-*
- split: '009'
path: data/009-*
- split: '010'
path: data/010-*
- split: '011'
path: data/011-*
- split: '012'
path: data/012-*
- split: '013'
path: data/013-*
- split: '014'
path: data/014-*
- split: '015'
path: data/015-*
- split: '016'
path: data/016-*
- split: '017'
path: data/017-*
- split: '018'
path: data/018-*
- split: '019'
path: data/019-*
- split: '020'
path: data/020-*
- split: '021'
path: data/021-*
- split: '022'
path: data/022-*
- split: '023'
path: data/023-*
- split: '024'
path: data/024-*
- split: '025'
path: data/025-*
- split: '026'
path: data/026-*
- split: '027'
path: data/027-*
- split: '028'
path: data/028-*
- split: '029'
path: data/029-*
- split: '030'
path: data/030-*
- split: '031'
path: data/031-*
- split: '032'
path: data/032-*
- split: '033'
path: data/033-*
- split: '034'
path: data/034-*
- split: '035'
path: data/035-*
- split: '036'
path: data/036-*
- split: '037'
path: data/037-*
- split: '038'
path: data/038-*
- split: '039'
path: data/039-*
- split: '040'
path: data/040-*
- split: '041'
path: data/041-*
- split: '042'
path: data/042-*
- split: '043'
path: data/043-*
- split: '044'
path: data/044-*
- split: '045'
path: data/045-*
- split: '046'
path: data/046-*
- split: '047'
path: data/047-*
- split: '048'
path: data/048-*
- split: '049'
path: data/049-*
- split: '050'
path: data/050-*
- split: '051'
path: data/051-*
- split: '052'
path: data/052-*
---
# Dataset Card for "10-K_sec_filings"
Dataset of 93.5K 10-K SEC EDGAR filings filed since 1999. This dataset contains many badly parsed filings as well as empty rows.
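The splits above are numbered shards rather than conventional train/test splits, so a single shard is usually loaded at a time. A minimal loading sketch, assuming the `datasets` library and the split names from the YAML header (field names follow the schema above, and some fields may be empty given the parsing issues noted):
```python
from datasets import load_dataset

# Load one numbered shard of the filings (splits are '001' ... '052')
filings = load_dataset("winterForestStump/10-K_sec_filings", split="001")

# Inspect a single filing; some section fields may be empty or badly parsed
example = filings[0]
print(example["company_name"], example["filing_date"])
print((example["Risk Factors"] or "")[:500])
```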
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dongyoung4091/hh-rlhf_with_features_rx_reformatted | 2023-09-06T14:37:45.000Z | [
"region:us"
] | dongyoung4091 | null | null | null | 0 | 11 | Entry not found |
shishir-dwi/News-Article-Categorization_IAB | 2023-09-09T12:10:09.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"news articles",
"IAB categories",
"dataset",
"articles",
"IAB",
"region:us"
] | shishir-dwi | null | null | null | 0 | 11 | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
language:
- en
tags:
- news articles
- IAB categories
- dataset
- articles
- IAB
pretty_name: IAB categorization Dataset
size_categories:
- 100K<n<1M
---
# Article and Category Dataset

## Overview
This dataset contains a collection of articles, primarily news articles, along with their respective IAB (Interactive Advertising Bureau) categories. It can be a valuable resource for various natural language processing (NLP) tasks, including text classification, text generation, and more.
## Dataset Information
- **Number of Samples:** 871,909
- **Number of Categories:** 26
### Column Information
- **text:** The text of the article.
- **target:** The IAB category label corresponding to the article.
## IAB Categories
The Interactive Advertising Bureau (IAB) categories are a standardized taxonomy used in the advertising industry to categorize digital advertising content. These categories help advertisers and marketers target their audience more effectively. Each category is represented by a label or code that indicates the content's topic or theme.
## Potential Use Cases
- **Text Classification:** Use this dataset to train and evaluate text classification models to predict IAB categories for articles.
- **Text Generation:** Utilize the articles in this dataset as a source for text generation tasks, such as generating news headlines or summaries.
- **Topic Modeling:** Explore the dataset to discover underlying topics and themes in the articles.
- **Information Retrieval:** Build search engines or recommendation systems that use article content and categories to retrieve relevant articles for users.
## Data Format
The dataset is provided in a standard tabular format with two columns: "text" and "target". You can easily load and manipulate the data using popular data manipulation libraries such as pandas in Python.
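For example, a minimal loading sketch (assuming the `datasets` library and a single train split; the column names follow the description above):
```python
from datasets import load_dataset

# Load the article/category pairs and move them into pandas for exploration
ds = load_dataset("shishir-dwi/News-Article-Categorization_IAB", split="train")
df = ds.to_pandas()

# "text" holds the article body, "target" the IAB category label
print(df["target"].value_counts().head())
print(df["text"].iloc[0][:300])
```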
## License
This dataset is available under the [Apache 2.0 License](LICENSE.md). Please review the license before using the dataset for any purpose.
|
jbhatab/medical-dataset | 2023-09-10T19:33:56.000Z | [
"license:mit",
"region:us"
] | jbhatab | null | null | null | 0 | 11 | ---
license: mit
---
|
JayKen/ysf2 | 2023-09-21T10:42:30.000Z | [
"region:us"
] | JayKen | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: Name
dtype: string
- name: Company
dtype: string
- name: linkedin
dtype: string
- name: concern
dtype: string
- name: narrative
dtype: string
splits:
- name: train
num_bytes: 1598
num_examples: 5
download_size: 3947
dataset_size: 1598
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ysf2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
varshmani/Data_Description | 2023-09-11T11:29:54.000Z | [
"license:other",
"region:us"
] | varshmani | null | null | null | 0 | 11 | ---
license: other
---
|
mohamedemam/Arabic-samsum-dialogsum | 2023-09-11T14:35:29.000Z | [
"task_categories:summarization",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:ar",
"license:cc-by-nc-2.0",
"arxiv:1911.12237",
"region:us"
] | mohamedemam | null | null | null | 1 | 11 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 27913254
num_examples: 24813
download_size: 13968520
dataset_size: 27913254
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-nc-2.0
task_categories:
- summarization
- conversational
language:
- ar
pretty_name: ar messum
size_categories:
- 10K<n<100K
---
# Dataset Card for "Arabic-samsum-dialogsum"
This dataset is a combination of the SAMSum and DialogSum datasets, translated into Arabic.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Arabic
## Dataset Structure
### Data Instances
The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique id of an example.
### Data Splits
- train: 24732
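A minimal loading sketch (assuming the `datasets` library; field names follow the YAML header of this card):
```python
from datasets import load_dataset

# The dataset ships a single train split
ds = load_dataset("mohamedemam/Arabic-samsum-dialogsum", split="train")

# Each row carries an id, the dialogue, its summary, and a topic
sample = ds[0]
print(sample["dialogue"])
print(sample["summary"])
```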
## Dataset Creation
### Curation Rationale
In paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Source Data
#### Initial Data Collection and Normalization
In paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
#### Who are the source language producers?
linguists
### Annotations
#### Annotation process
In paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
#### Who are the annotators?
language experts
### Personal and Sensitive Information
None, see above: Initial Data Collection and Normalization
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
### Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
### Contributions
Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MichaelAI23/hotel_requests | 2023-09-11T13:28:30.000Z | [
"license:apache-2.0",
"region:us"
] | MichaelAI23 | null | null | null | 0 | 11 | ---
license: apache-2.0
---
|
CreatorPhan/QA_6_2048 | 2023-09-11T15:47:32.000Z | [
"region:us"
] | CreatorPhan | null | null | null | 0 | 11 | Entry not found |
pietrolesci/amazoncat-13k | 2023-10-02T18:01:14.000Z | [
"region:us"
] | pietrolesci | null | null | null | 1 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
- config_name: labels
data_files:
- split: train
path: labels/train-*
dataset_info:
- config_name: default
features:
- name: uid_original
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: target_ind
sequence: int64
- name: target_rel
sequence: float64
- name: text
dtype: string
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 3262662835
num_examples: 1186239
- name: test
num_bytes: 842174854
num_examples: 306782
download_size: 2560646204
dataset_size: 4104837689
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 1836297972
num_examples: 1186239
- name: test
num_bytes: 474898536
num_examples: 306782
download_size: 3228756828
dataset_size: 2311196508
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 3658361076
num_examples: 1186239
- name: test
num_bytes: 946115688
num_examples: 306782
download_size: 5524926640
dataset_size: 4604476764
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 3658361076
num_examples: 1186239
- name: test
num_bytes: 946115688
num_examples: 306782
download_size: 5524904909
dataset_size: 4604476764
- config_name: labels
features:
- name: labels
dtype: string
splits:
- name: train
num_bytes: 243277
num_examples: 13331
download_size: 160461
dataset_size: 243277
---
# Dataset Card for "amazoncat-13k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
922-CA/lf2_09122023_test1 | 2023-09-22T08:08:59.000Z | [
"license:openrail",
"region:us"
] | 922-CA | null | null | null | 0 | 11 | ---
license: openrail
---
# Lora FMG-9 (LLaMA2) 09122023 test 1
* Dataset of FMG-9 dialogue from Girls' Frontline
* Manually edited to turn into multi-turn dialogue |
pratik33/korean_STT | 2023-09-12T12:07:02.000Z | [
"region:us"
] | pratik33 | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 155417701.0
num_examples: 200
download_size: 152729272
dataset_size: 155417701.0
---
# Dataset Card for "korean_STT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hans12Wurst123/test-llama2-nuv | 2023-09-12T12:59:24.000Z | [
"region:us"
] | Hans12Wurst123 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 60718
num_examples: 331
download_size: 10794
dataset_size: 60718
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test-llama2-nuv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Kranajan/test-01-00 | 2023-09-27T16:53:27.000Z | [
"task_categories:conversational",
"size_categories:n<1K",
"language:es",
"region:us"
] | Kranajan | null | null | null | 0 | 11 | ---
language:
- es
pretty_name: test amco
size_categories:
- n<1K
task_categories:
- conversational
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_examples: 284
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ali-C137/Goud-Sum-Instruct | 2023-09-12T19:22:47.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:ar",
"license:apache-2.0",
"region:us"
] | Ali-C137 | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 329002522
num_examples: 139288
- name: validation
num_bytes: 22449821
num_examples: 9497
- name: test
num_bytes: 22447355
num_examples: 9497
download_size: 170777466
dataset_size: 373899698
license: apache-2.0
task_categories:
- summarization
language:
- ar
size_categories:
- 100K<n<1M
---
# Dataset Card for Goud-Sum-Instruct
Goud-Sum-Instruct is a curated dataset derived from the [Goud-sum](https://huggingface.co/datasets/Goud/Goud-sum) dataset and reformatted for fine-tuning chat and instruct models without altering the underlying training data. Every example carries the same core instruction, "To Summarise", so a chat model fine-tuned on this dataset learns to respond to that instruction and can later serve as a summarizer.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Goud.ma: a News Article Dataset for Summarization in Moroccan Darija](https://openreview.net/forum?id=BMVq5MELb9)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Goud-Sum-Instruct contains 158k articles and their headlines extracted from the [Goud.ma](https://www.goud.ma/) news website. The articles are written in the Arabic script. All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija).
### Supported Tasks and Leaderboards
Text Summarization
### Languages
- Moroccan Arabic (Darija)
- Modern Standard Arabic
## Dataset Structure
### Data Instances
The dataset consists of article-headline pairs in string format.
### Data Fields
- article: a string containing the body of the news article
- headline: a string containing the article's headline
- categories: a list of string of article categories
### Data Splits
Goud-Sum-Instruct dataset has 3 splits: _train_, _validation_, and _test_. Below are the number of instances in each split.
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train | 139,288 |
| Validation | 9,497 |
| Test | 9,497 |
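A minimal loading sketch (assuming the `datasets` library and the split names above):
```python
from datasets import load_dataset

# Load the instruction-formatted summarization data
goud = load_dataset("Ali-C137/Goud-Sum-Instruct")

print(goud)              # train / validation / test splits
print(goud["train"][0])  # each row is an id plus an instruction-formatted text field
```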
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The text was written by journalists at [Goud](https://www.goud.ma/).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{issam2022goudma,
title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija},
author={Abderrahmane Issam and Khalil Mrini},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=BMVq5MELb9}
}
```
### Contributions
Thanks to [@issam9](https://github.com/issam9) and [@KhalilMrini](https://github.com/KhalilMrini) for adding the original [dataset](https://huggingface.co/datasets/Goud/Goud-sum) |
jppgks/twitter-financial-news-sentiment | 2023-09-13T22:05:58.000Z | [
"license:mit",
"region:us"
] | jppgks | null | null | null | 0 | 11 | ---
license: mit
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 1906560
num_examples: 9543
- name: validation
num_bytes: 479540
num_examples: 2388
download_size: 728648
dataset_size: 2386100
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
[zeroshot/twitter-financial-news-sentiment](https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment) prepared for LLM fine-tuning
by adding an `instruction` column and mapping the label from numeric to string (`{0:"negative", 1:'positive', 2:'neutral'}`).
[Source](https://github.com/AI4Finance-Foundation/FinGPT/blob/master/fingpt/FinGPT-v3/data/making_data.ipynb)
```python
from datasets import load_dataset
import datasets
from huggingface_hub import notebook_login
notebook_login()
ds = load_dataset('zeroshot/twitter-financial-news-sentiment')
num_to_label = {
0: 'negative',
1: 'positive',
2: 'neutral',
}
instruction = 'What is the sentiment of this tweet? Please choose an answer from {negative/neutral/positive}.'
# Training split
ds_train = ds['train']
ds_train = ds_train.to_pandas()
ds_train['label'] = ds_train['label'].apply(num_to_label.get)
ds_train['instruction'] = instruction
ds_train.columns = ['input', 'output', 'instruction']
ds_train = datasets.Dataset.from_pandas(ds_train)
ds_train.push_to_hub("twitter-financial-news-sentiment")
# Validation split
ds_valid = ds['validation']
ds_valid = ds_valid.to_pandas()
ds_valid['label'] = ds_valid['label'].apply(num_to_label.get)
ds_valid['instruction'] = instruction
ds_valid.columns = ['input', 'output', 'instruction']
ds_valid = datasets.Dataset.from_pandas(ds_valid, split='validation')
ds_valid.push_to_hub("twitter-financial-news-sentiment", split='validation')
```
|
atmallen/truth-tagged-oasst-alpaca | 2023-09-14T00:52:39.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
dataset_info:
features:
- name: message_id
dtype: string
- name: s_idx
dtype: int64
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: validation
num_bytes: 232102
num_examples: 197
download_size: 61886
dataset_size: 232102
---
# Dataset Card for "truth-tagged-oasst-alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davidadamczyk/election2 | 2023-09-14T07:12:25.000Z | [
"region:us"
] | davidadamczyk | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: text_label
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 108283.95478723405
num_examples: 526
- name: test
num_bytes: 46525.04521276596
num_examples: 226
download_size: 84563
dataset_size: 154809.0
---
# Dataset Card for "election2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mtc/faithfulness_benchmark_sanity_check_factcc | 2023-09-15T14:54:38.000Z | [
"region:us"
] | mtc | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: claim
dtype: string
- name: is_faithful
dtype: bool
- name: filepath
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 786411
num_examples: 189
download_size: 334385
dataset_size: 786411
---
# Dataset Card for "faithfulness_benchmark_sanity_check_factcc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huangyt/FINETUNE4 | 2023-09-16T06:02:11.000Z | [
"license:openrail",
"region:us"
] | huangyt | null | null | null | 0 | 11 | ---
license: openrail
---

# 📔 **DATASET**
| **Dataset** | Class | Number of Questions |
| ------- | ----------------------------------------------------------------- | ------------------------ |
| **FLAN_CoT(zs)** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense | 8000 |
| **Prm800k** | Reasoning 、 MATH | 6713 |
| **ScienceQA** | ScienceQA | 5177 |
| **SciBench** | ScienceQA | 695 |
| **ReClor** | Reasoning | 1624 |
| **TheoremQA** | Commonsense 、 MATH 、 ScienceQA | 800 |
| **OpenBookQA** | Text_Understanding 、 Reasoning 、 Commonsense 、 ScienceQA | 5957 |
| **ARB** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense 、 Text_Understanding | 605 |
| **Openassistant-guanaco** | Commonsense 、 Text_Understanding 、 Reasoning | 802 |
| **SAT** | Text_Understanding 、 Reasoning 、 MATH | 426 |
| **GRE、GMAT** | Reasoning 、 MATH | 254 |
| **AMC、AIME** | Reasoning 、 MATH | 1000 |
| **LSAT** | Reasoning 、 LAW | 1009 |
| **Gaokao-biology** | Comprehensive | 210 |
| **Gaokao-chemistry** | Comprehensive | 207 |
| **Gaokao-chinese** | Comprehensive | 246 |
| **Gaokao-english** | Comprehensive | 306 |
| **Gaokao-geography** | Comprehensive | 199 |
| **Gaokao-mathcloze** | Comprehensive | 118 |
| **Gaokao-mathqa** | Comprehensive | 351 |
| **Gaokao-physics** | Comprehensive | 200 |
| **LogiQA** | Reasoning | 651 |
| **LeetCode** | Reasoning 、 Code | 2359 |
# 📌 **Method**
## *Improving the dataset*
Based on the "Textbooks Are All You Need" paper, we wanted to try fine-tuning using more advanced questions.
## *Dataset Format Definition*
Use "instruction、input、output" tend to lean towards guided datasets. In this format, each sample includes an instruction, an input, and an expected output. The instruction provides guidance on how to process the input to generate the output. This format of dataset is often used to train models to perform specific tasks, as they explicitly indicate the operations the model should perform.
```
{
  "input": "",
  "output": "",
  "instruction": ""
}
```
- ### [FLAN_V2 COT(ZS)](https://huggingface.co/datasets/conceptofmind/cot_submix_original/tree/main)
We extract only the 'zs_opt' examples from the CoT data and categorize each task.
- ### SAT、GRE、GMAT、AMC、AIME、LSAT
We configure the input for datasets such as GRE, GMAT, SAT, etc. as "Please read the question and options carefully, then select the most appropriate answer and provide the corresponding explanation." For the math questions, the input is set to "Please provide the answer along with a corresponding explanation based on the given question." Moreover, the questions are arranged in ascending order of difficulty. This is done because, according to the Orca paper, training started with GPT-3.5 and later transitioned to GPT-4: to prevent the student model from acquiring knowledge beyond its scope and thereby delivering suboptimal results, a progressive learning strategy was used. This approach was found to be effective; therefore, for datasets like AMC and AIME, which have various levels of difficulty, we arranged the questions to embody this gradual, progressive learning technique.
Furthermore, for these datasets, the question and options are combined to form the instruction, and the label and solution are merged to become the output (a minimal sketch of this conversion is given after this list).
Lastly, for the LSAT dataset, since it doesn't involve step-by-step processes, the passage is transformed into the instruction, the combination of the question and options serves as the input, and the label represents the output.
- ### Gaokao
Most of the inputs are configured by us:
"Please read and understand the requirements and content of the question carefully, and then choose the option that best fits the description of the question or best answers the question from the options provided."
Only gaokao-mathcloze uses a different input:
"Please read and comprehend the requirements and content of the question carefully. Gradually ponder upon it and present the most appropriate answer based on your judgment."
- ### LeetCode
Input configuration:
"Analyze the problem description and constraints, then develop a step-by-step Python function to generate the expected output based on the given inputs. Include brief explanations at each step to illustrate your solution process."
- ### LogiQA
Only a general format conversion is performed.
- ### [OTHER](https://github.com/arielnlee/Platypus/tree/main/data_pipeline)
Prm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.
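As referenced above, a minimal sketch of the SAT/GRE/GMAT-style conversion; the file name `sat_ORIGINAL.json` and the field names `question`, `options`, `label`, and `solution` are illustrative assumptions rather than the actual source schema:
```py
import json

FIXED_INPUT = (
    "Please read the question and options carefully, then select the most "
    "appropriate answer and provide the corresponding explanation."
)

# NOTE: the file name and field names below are hypothetical placeholders
with open("sat_ORIGINAL.json", "r", encoding="utf-8") as f:
    raw = json.load(f)

converted = [
    {
        "input": FIXED_INPUT,
        # the question and options combined form the instruction
        "instruction": f"{item['question']}\n{item['options']}",
        # the label and solution merged become the output
        "output": f"{item['label']},solution:{item['solution']}",
    }
    for item in raw
]

with open("sat_convert.json", "w", encoding="utf-8") as f:
    json.dump(converted, f, ensure_ascii=False, indent=4)
```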
## *Sampling Algorithms*
Since the flan_v2 cot dataset includes tasks like:
- cot_esnli
- cot_strategyqa
- cot_qasc
- stream_qed
- cot_gsm8k
- cot_ecqa
- cot_creak
- stream_aqua
To ensure this dataset contains diverse, high-quality data, we first select zs_opt questions. Then, we keep only questions whose output length is at least the average length (discarding the shorter ones); this step aims to help the model learn richer reasoning steps. After that, we perform stratified sampling. Initially, we attempted stratified sampling before the length-based filtering, but we found that this approach resulted in varying sample sizes, making it challenging to reproduce. Thus, we decided to first filter by length and then perform stratified sampling.
```py
import json
import random

with open("cot_ORIGINAL.json", "r") as f:
    abc = json.load(f)

# --- part1: keep only "zs_opt" examples ---
zsopt_data = []
for i in abc:
    if i["template_type"] == "zs_opt":
        zsopt_data.append(i)

# --- part2: filter by output length and count samples per task ---
output_lengths = [len(i["targets"]) for i in zsopt_data]
average_length = sum(output_lengths) / len(output_lengths)  # average output length
filtered_data = []
for a in zsopt_data:
    if len(a["targets"]) >= average_length:
        filtered_data.append(a)  # keep only outputs at least as long as the average
class_counts = {}  # count the number of samples for each task
for a in filtered_data:
    task_name = a["task_name"]
    if task_name in class_counts:
        class_counts[task_name] += 1
    else:
        class_counts[task_name] = 1

# --- part3: stratified sampling and conversion to the target format ---
total_samples = 8000  # we plan to select a total of 8000 samples
sample_ratios = {}
for task_name, count in class_counts.items():
    sample_ratios[task_name] = count / len(filtered_data)
sample_sizes = {}
for task_name, sample_ratio in sample_ratios.items():
    sample_sizes[task_name] = round(sample_ratio * total_samples)
stratified_samples = {}  # perform stratified sampling for each task
for task_name, sample_size in sample_sizes.items():
    class_samples = []
    for data in filtered_data:
        if data["task_name"] == task_name:
            class_samples.append(data)
    selected_samples = random.sample(class_samples, sample_size)
    stratified_samples[task_name] = selected_samples
final_samples = []  # convert to the specified format
for task_name, samples in stratified_samples.items():
    for sample in samples:
        final_samples.append(
            {
                "input": "",                      # input is left empty
                "output": sample["targets"],      # model output
                "instruction": sample["inputs"],  # question
            }
        )
with open("cot_change.json", "w") as f:
    json.dump(final_samples, f, indent=2)
```
MATH questions arranged according to LEVEL:
```py
import json

with open("math-json.json", "r", encoding="utf-8") as f:
    data_list = json.load(f)

sorted_data = sorted(data_list, key=lambda x: x["other"]["level"])

output_data = [
    {
        "input": "Please provide the answer along with a corresponding explanation based on the given question.",
        "output": f"{item['answer']},solution:{item['other']['solution']}",
        "instruction": item["question"],
    }
    for item in sorted_data
]

with open("math_convert.json", "w", encoding="utf-8") as output_file:
    json.dump(output_data, output_file, ensure_ascii=False, indent=4)
``` |
dell-research-harvard/associating-press | 2023-09-15T23:06:20.000Z | [
"license:cc-by-2.0",
"region:us"
] | dell-research-harvard | null | null | null | 0 | 11 | ---
license: cc-by-2.0
---
|
JAYASWAROOP/mining_rules_data | 2023-09-20T10:56:22.000Z | [
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] | JAYASWAROOP | null | null | null | 0 | 11 | ---
task_categories:
- question-answering
language:
- en
license: cc
--- |
quocanh34/test_result_with_regex_v2 | 2023-09-18T09:16:31.000Z | [
"region:us"
] | quocanh34 | null | null | null | 0 | 11 | Entry not found |
Falah/story_1_prompts | 2023-09-23T10:18:17.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 3199
num_examples: 10
download_size: 4429
dataset_size: 3199
---
# Dataset Card for "story_1_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mohsen2/snappfood | 2023-09-18T18:31:12.000Z | [
"region:us"
] | mohsen2 | null | null | null | 0 | 11 | Entry not found |
Dan-Stefan/conveyors_test | 2023-10-10T06:36:12.000Z | [
"region:us"
] | Dan-Stefan | null | null | null | 0 | 11 | Entry not found |
Hieu-Pham/Instructions | 2023-09-19T13:43:12.000Z | [
"region:us"
] | Hieu-Pham | null | null | null | 0 | 11 | Entry not found |
strumber/LetsMOD-Gen-Dataset-V-1 | 2023-09-19T16:43:32.000Z | [
"region:us"
] | strumber | null | null | null | 0 | 11 | Entry not found |
factored/saleswiz_gpt_is_relevant | 2023-09-19T22:42:07.000Z | [
"region:us"
] | factored | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 275081
num_examples: 977
download_size: 176589
dataset_size: 275081
---
# Dataset Card for "saleswiz_gpt_is_relevant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChanHE/score_112_qa | 2023-10-01T16:58:15.000Z | [
"region:us"
] | ChanHE | null | null | null | 0 | 11 | Entry not found |
Falah/arabic_glamour_prompts | 2023-09-20T07:53:14.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1949534
num_examples: 10000
download_size: 328987
dataset_size: 1949534
---
# Dataset Card for "arabic_glamour_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jackmax5/data | 2023-09-20T10:42:11.000Z | [
"license:gpl-2.0",
"region:us"
] | Jackmax5 | null | null | null | 0 | 11 | ---
license: gpl-2.0
---
|
Sehaj/robot_commands_2 | 2023-09-20T10:26:51.000Z | [
"license:mit",
"region:us"
] | Sehaj | null | null | null | 1 | 11 | ---
license: mit
---
|
alayaran/bodo-news-headline | 2023-09-20T13:54:01.000Z | [
"license:mit",
"region:us"
] | alayaran | null | null | null | 0 | 11 | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
- name: headline
dtype: string
splits:
- name: train
num_bytes: 9875669
num_examples: 2569
- name: validation
num_bytes: 441930
num_examples: 100
- name: test
num_bytes: 434653
num_examples: 100
download_size: 3755546
dataset_size: 10752252
---
|
IsaacJu666/pokemon | 2023-09-21T21:15:32.000Z | [
"region:us"
] | IsaacJu666 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: text_blip
dtype: string
splits:
- name: train
num_bytes: 56583875.0
num_examples: 833
download_size: 50947153
dataset_size: 56583875.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pokemon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ahmed-masry/ChartQA | 2023-09-21T03:31:50.000Z | [
"license:gpl-3.0",
"region:us"
] | ahmed-masry | null | null | null | 0 | 11 | ---
license: gpl-3.0
---
|
josedanielaromi/FOMC20070321 | 2023-09-21T11:24:18.000Z | [
"region:us"
] | josedanielaromi | null | null | null | 0 | 11 | Entry not found |
Nicolas-BZRD/English_French_Webpages_Scraped_Translated | 2023-09-21T14:29:04.000Z | [
"task_categories:translation",
"size_categories:10M<n<100M",
"language:en",
"language:fr",
"license:odbl",
"webpages",
"parallel",
"parallel data",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 11 | ---
language:
- en
- fr
license: odbl
size_categories:
- 10M<n<100M
task_categories:
- translation
tags:
- webpages
- parallel
- parallel data
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 6811772380
num_examples: 17161263
download_size: 640497280
dataset_size: 6811772380
---
# English French Webpages Scraped Translated
### Dataset Summary
French/English parallel texts for training translation models: over 17.1 million sentences in French and English. The dataset was created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs into English URLs, assuming that these documents are translations of each other. This is the main dataset of the Workshop on Statistical Machine Translation (WMT) 2015 and can be used for machine translation and language models. Refer to the paper here: http://www.statmt.org/wmt15/pdf/WMT01.pdf
### Post-process
This dataset has been post-processed to remove all duplicates, empty fields, and phrases containing fewer than 5 words.
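With over 17 million rows, streaming is a practical way to inspect the data without downloading the full corpus. A minimal sketch, assuming the `datasets` library:
```python
from datasets import load_dataset

# Stream the parallel corpus instead of downloading everything at once
ds = load_dataset(
    "Nicolas-BZRD/English_French_Webpages_Scraped_Translated",
    split="train",
    streaming=True,
)

# Each row holds an English sentence ("en") and its French counterpart ("fr")
for i, pair in enumerate(ds):
    print(pair["en"], "->", pair["fr"])
    if i == 2:
        break
```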
### Original Dataset Citation
```
@InProceedings{bojar-EtAl:2015:WMT,
author = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Haddow, Barry and Huck, Matthias and Hokamp, Chris and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Scarton, Carolina and Specia, Lucia and Turchi, Marco},
title = {Findings of the 2015 Workshop on Statistical Machine Translation},
booktitle = {Proceedings of the Tenth Workshop on Statistical Machine Translation},
month = {September},
year = {2015},
address = {Lisbon, Portugal},
publisher = {Association for Computational Linguistics},
pages = {1--46},
url = {http://aclweb.org/anthology/W15-3001}
}
``` |
neelblabla/enron_labeled_email-llama2-7b_finetuning | 2023-09-21T16:37:45.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | neelblabla | null | null | null | 0 | 11 | ---
task_categories:
- text-classification
language:
- en
pretty_name: enron_labeled_prompts
size_categories:
- 1K<n<10K
--- |
Falah/village4kids_0_prompts | 2023-09-22T07:31:50.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 2702
num_examples: 10
download_size: 4036
dataset_size: 2702
---
# Dataset Card for "village4kids_0_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
iohadrubin/top_terms_subtopics | 2023-09-24T16:47:08.000Z | [
"region:us"
] | iohadrubin | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: value
dtype: string
- name: cluster
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3330605
num_examples: 4096
download_size: 0
dataset_size: 3330605
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "top_terms_subtopics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
IceMasterT/BTC-Data-1Hour-2018-2023 | 2023-09-29T15:48:10.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"finance",
"region:us"
] | IceMasterT | null | null | null | 1 | 11 | ---
license: mit
task_categories:
- token-classification
- text-classification
language:
- en
tags:
- finance
pretty_name: Bitcoin Data 1 Hour 2018-2023
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
jayashri710/dress_data | 2023-09-25T10:50:59.000Z | [
"region:us"
] | jayashri710 | null | null | null | 0 | 11 | Entry not found |
lhallee/uniref50_50-512 | 2023-09-26T19:14:45.000Z | [
"region:us"
] | lhallee | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: uniref
dtype: string
splits:
- name: train
num_bytes: 10696656442
num_examples: 51521691
download_size: 10582703793
dataset_size: 10696656442
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uniref50_50-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adalbertojunior/substance_quantity | 2023-09-26T22:41:55.000Z | [
"region:us"
] | adalbertojunior | null | null | null | 0 | 11 | Entry not found |
mayank1307/pdp_tokens | 2023-09-27T13:27:07.000Z | [
"region:us"
] | mayank1307 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2439583
num_examples: 9105
download_size: 560074
dataset_size: 2439583
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pdp_tokens"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tuxmx/nfl_bets_scores | 2023-09-28T03:57:29.000Z | [
"region:us"
] | tuxmx | null | null | null | 0 | 11 | Entry not found |
PurCL/marinda-type-inference-debuginfo-only-O2-shuffle | 2023-09-28T05:10:26.000Z | [
"region:us"
] | PurCL | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: metadata
struct:
- name: binary_name
dtype: string
- name: function_addr
dtype: int64
- name: function_name
dtype: string
- name: project_name
dtype: string
- name: code_w_type
dtype: string
- name: code
dtype: string
- name: data_dep
dtype: string
splits:
- name: train
num_bytes: 204117739.7069311
num_examples: 29631
- name: test
num_bytes: 22684341.293068886
num_examples: 3293
download_size: 56107280
dataset_size: 226802081.0
---
# Dataset Card for "marinda-type-inference-debuginfo-only-O2-shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mHossain/idiom_generation_v1 | 2023-09-28T07:22:10.000Z | [
"region:us"
] | mHossain | null | null | null | 0 | 11 | Entry not found |
pavithrav/emotion | 2023-09-28T09:43:56.000Z | [
"region:us"
] | pavithrav | null | null | null | 0 | 11 | Entry not found |
kyle-mirich/bible_bot_beliefs_test_v01 | 2023-10-09T22:47:47.000Z | [
"license:mit",
"region:us"
] | kyle-mirich | null | null | null | 0 | 11 | ---
license: mit
---
|
rexionmars/llama2-evaluator-assistant | 2023-09-30T18:02:19.000Z | [
"region:us"
] | rexionmars | null | null | null | 0 | 11 | Entry not found |
momo22/eng2nep | 2023-10-02T07:15:34.000Z | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:ne",
"license:mit",
"region:us"
] | momo22 | null | null | null | 0 | 11 | ---
license: mit
task_categories:
- translation
language:
- en
- ne
size_categories:
- 1M<n<10M
--- |
madaanpulkit/opus_eng_hin_pan | 2023-10-02T05:50:52.000Z | [
"region:us"
] | madaanpulkit | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sent
dtype: string
- name: lang
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 159097285
num_examples: 1283230
- name: validation
num_bytes: 770267
num_examples: 8000
- name: test
num_bytes: 790471
num_examples: 8000
download_size: 71739889
dataset_size: 160658023
---
# Dataset Card for "opus_eng_hin_pan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FelixdoingAI/IP2P-hiddenwm-200 | 2023-10-03T14:09:13.000Z | [
"region:us"
] | FelixdoingAI | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
- name: adversarial_image
dtype: image
splits:
- name: train
num_bytes: 104484241.0
num_examples: 200
download_size: 104481659
dataset_size: 104484241.0
---
# Dataset Card for "IP2P-hiddenwm-200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk/rl-bench-test-crowdsource | 2023-10-03T22:05:47.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: user_name
dtype: string
- name: bot_name
dtype: string
- name: memory
dtype: string
- name: prompt
dtype: string
- name: chat_history
list:
- name: message
dtype: string
- name: sender
dtype: string
splits:
- name: train
num_bytes: 292785
num_examples: 200
download_size: 190141
dataset_size: 292785
---
# Dataset Card for "rl-bench-test-crowdsource"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Musa22/llma | 2023-10-04T09:59:01.000Z | [
"region:us"
] | Musa22 | null | null | null | 0 | 11 | Entry not found |
Intuit-GenSRF/jigsaw-unintende-bias | 2023-10-04T23:33:59.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 611338216
num_examples: 1999516
download_size: 417071482
dataset_size: 611338216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-unintended-biased"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/hackathon-somos-nlp-2023-suicide-comments-es | 2023-10-05T00:55:52.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 942250
num_examples: 10050
download_size: 611736
dataset_size: 942250
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hackathon-somos-nlp-2023-suicide-comments-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/tweet-eval-offensive | 2023-10-05T01:08:13.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 1651630
num_examples: 11916
download_size: 1020434
dataset_size: 1651630
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tweet_eval-offensive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
philschmid/markdown-documentation-transformers | 2023-10-05T13:42:59.000Z | [
"license:apache-2.0",
"region:us"
] | philschmid | null | null | null | 0 | 11 | ---
license: apache-2.0
---
# Hugging Face Transformers documentation as markdown dataset
This dataset was created using [Clipper.js](https://github.com/philschmid/clipper.js). Clipper is a Node.js command line tool that allows you to easily clip content from web pages and convert it to Markdown. It uses Mozilla's Readability library and Turndown under the hood to parse web page content and convert it to Markdown.
This dataset can be used to build RAG applications that need access to the Transformers documentation.
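As a minimal sketch of how such an application might ingest the data (the `train` split name and the column layout are assumptions, not documented in this card):
```
# Hedged sketch: load the markdown documents and inspect the schema before
# building a retrieval index. The split name "train" is an assumption.
from datasets import load_dataset

docs = load_dataset("philschmid/markdown-documentation-transformers", split="train")
print(docs.column_names)  # check the actual column names before using them
print(docs[0])            # first markdown document
```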
Example document: https://huggingface.co/docs/transformers/peft
```
# Load adapters with 🤗 PEFT
[Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient with lower compute usage while producing results comparable to a fully fine-tuned model.
Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them.

The adapter weights for a OPTForCausalLM model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.
If you’re interested in learning more about the 🤗 PEFT library, check out the [documentation](https://huggingface.co/docs/peft/index).
## Setup
Get started by installing 🤗 PEFT:
If you want to try out the brand new features, you might be interested in installing the library from source:
....
``` |
has84/test | 2023-10-06T07:52:19.000Z | [
"license:mit",
"region:us"
] | has84 | null | null | null | 0 | 11 | ---
license: mit
---
|
DopeorNope/Eng_Kor_COT_combined | 2023-10-06T06:38:17.000Z | [
"region:us"
] | DopeorNope | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 36071886
num_examples: 27085
download_size: 19831176
dataset_size: 36071886
---
# Dataset Card for "Eng_Kor_COT_combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chiualfredo/oil_origin | 2023-10-07T04:59:08.000Z | [
"region:us"
] | chiualfredo | null | null | null | 0 | 11 | Entry not found |
sidthip/testquiz | 2023-10-07T10:20:05.000Z | [
"region:us"
] | sidthip | null | null | null | 0 | 11 | Entry not found |
haseong8012/korean-child-command-voice_sample | 2023-10-07T11:34:08.000Z | [
"region:us"
] | haseong8012 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: audio_data
sequence: float32
splits:
- name: train
num_bytes: 1172309014
num_examples: 1210
download_size: 414232001
dataset_size: 1172309014
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "korean-child-command-voice_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
towhid/aesir-train-420 | 2023-10-07T18:10:39.000Z | [
"region:us"
] | towhid | null | null | null | 0 | 11 | Entry not found |
anz2/NASA_OSDR | 2023-10-10T23:15:23.000Z | [
"license:apache-2.0",
"region:us"
] | anz2 | This dataset contains aggregated data from the NASA OSDR S3 bucket.
It contains up to 451 experiments and tables of samples from those experiments. | NASA Space Biology Open Science Data Repository (OSDR) was accessed on 11.10.2023 from https://registry.opendata.aws/nasa-osdr. | null | 0 | 11 | ---
license: apache-2.0
configs:
- config_name: experiments
data_files: "data/train/experiments.csv"
sep: ","
default: true
- config_name: samples
data_files: "data/train/samples.csv"
sep: ","
---
|
andreabac3/truthful_qa_multiple_choice_ita | 2023-10-08T14:01:36.000Z | [
"region:us"
] | andreabac3 | null | null | null | 0 | 11 | ---
dataset_info:
features:
- name: question
dtype: string
- name: mc1_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int32
- name: mc2_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int32
splits:
- name: validation
num_bytes: 666828
num_examples: 817
download_size: 305337
dataset_size: 666828
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "truthful_qa_multiple_choice_ita"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
darcycao/en2zh_specaildataset | 2023-10-09T09:45:26.000Z | [
"region:us"
] | darcycao | null | null | null | 0 | 11 | Entry not found |
dmrau/cqadupstack-android | 2023-10-09T12:39:30.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 11 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 47953
num_examples: 699
- name: corpus
num_bytes: 12840959
num_examples: 22998
download_size: 7657118
dataset_size: 12888912
---
# Dataset Card for "cqadupstack-android"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ccaligned_multilingual | 2022-11-03T16:31:56.000Z | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"size_categories:10M<n<100M",
"sourc... | null | CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents, and ensuring corresponding language codes were corresponding in the URLs of web documents. This pattern matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to mulitple documents in different target language, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). | @inproceedings{elkishky_ccaligned_2020,
author = {El-Kishky, Ahmed and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Koehn, Philipp},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)},
month = {November},
title = {{CCAligned}: A Massive Collection of Cross-lingual Web-Document Pairs},
year = {2020}
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.480",
doi = "10.18653/v1/2020.emnlp-main.480",
pages = "5960--5969"
} | null | 3 | 10 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- ak
- am
- ar
- as
- ay
- az
- be
- bg
- bm
- bn
- br
- bs
- ca
- ceb
- ckb
- cs
- cy
- de
- dv
- el
- eo
- es
- fa
- ff
- fi
- fo
- fr
- fy
- ga
- gl
- gn
- gu
- he
- hi
- hr
- hu
- id
- ig
- is
- it
- iu
- ja
- ka
- kac
- kg
- kk
- km
- kn
- ko
- ku
- ky
- la
- lg
- li
- ln
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- nso
- ny
- om
- or
- pa
- pl
- ps
- pt
- rm
- ro
- ru
- rw
- sc
- sd
- se
- shn
- si
- sk
- sl
- sn
- so
- sq
- sr
- ss
- st
- su
- sv
- sw
- syc
- szl
- ta
- te
- tg
- th
- ti
- tl
- tn
- tr
- ts
- tt
- ug
- uk
- ur
- uz
- ve
- vi
- war
- wo
- xh
- yi
- yo
- zgh
- zh
- zu
- zza
license:
- unknown
multilinguality:
- translation
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
paperswithcode_id: ccaligned
pretty_name: CCAligned
dataset_info:
- config_name: documents-zz_TR
features:
- name: Domain
dtype: string
- name: Source_URL
dtype: string
- name: Target_URL
dtype: string
- name: translation
dtype:
translation:
languages:
- en_XX
- zz_TR
splits:
- name: train
num_bytes: 641412
num_examples: 41
download_size: 125488
dataset_size: 641412
- config_name: sentences-zz_TR
features:
- name: translation
dtype:
translation:
languages:
- en_XX
- zz_TR
- name: LASER_similarity
dtype: float32
splits:
- name: train
num_bytes: 4056
num_examples: 34
download_size: 1428
dataset_size: 4056
- config_name: documents-tz_MA
features:
- name: Domain
dtype: string
- name: Source_URL
dtype: string
- name: Target_URL
dtype: string
- name: translation
dtype:
translation:
languages:
- en_XX
- tz_MA
splits:
- name: train
num_bytes: 51782
num_examples: 4
download_size: 11996
dataset_size: 51782
- config_name: sentences-tz_MA
features:
- name: translation
dtype:
translation:
languages:
- en_XX
- tz_MA
- name: LASER_similarity
dtype: float32
splits:
- name: train
num_bytes: 6256
num_examples: 33
download_size: 2420
dataset_size: 6256
- config_name: documents-ak_GH
features:
- name: Domain
dtype: string
- name: Source_URL
dtype: string
- name: Target_URL
dtype: string
- name: translation
dtype:
translation:
languages:
- en_XX
- ak_GH
splits:
- name: train
num_bytes: 10738312
num_examples: 249
download_size: 399236
dataset_size: 10738312
- config_name: sentences-ak_GH
features:
- name: translation
dtype:
translation:
languages:
- en_XX
- ak_GH
- name: LASER_similarity
dtype: float32
splits:
- name: train
num_bytes: 50110
num_examples: 478
download_size: 17636
dataset_size: 50110
---
# Dataset Card for ccaligned_multilingual
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.statmt.org/cc-aligned/
- **Repository:** [Needs More Information]
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.480.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents, and ensuring that the corresponding language codes appeared in the URLs of the web documents. This pattern matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to multiple documents in different target languages, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). This corpus was created from 68 Commoncrawl Snapshots.
To load a language which isn't part of the predefined configurations, all you need to do is specify the language code. You can find the valid languages at http://www.statmt.org/cc-aligned/. E.g.
```
from datasets import load_dataset
dataset = load_dataset("ccaligned_multilingual", language_code="fr_XX", type="documents")
```
or
```
dataset = load_dataset("ccaligned_multilingual", language_code="fr_XX", type="sentences")
```
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The text in the dataset is in 137 languages, each aligned with English.
## Dataset Structure
### Data Instances
An instance of `documents` type for language `ak_GH`:
```
{'Domain': 'islamhouse.com', 'Source_URL': 'https://islamhouse.com/en/audios/373088/', 'Target_URL': 'https://islamhouse.com/ak/audios/373088/', 'translation': {'ak_GH': "Ntwatiaa / wɔabɔ no tɔfa wɔ mu no te ase ma Umrah - Arab kasa|Islamhouse.com|Follow us:|facebook|twitter|taepe|Titles All|Fie wibesite|kasa nyina|Buukuu edi adanse ma prente|Nhyehyɛmu|Nyim/sua Islam|Curriculums|Nyina ndeɛma|Nyina ndeɛma (295)|Buukuu/ nwoma (2)|sini / muuvi (31)|ɔdio (262)|Aɛn websideNew!|Kɔ wura kramosom mu seisei|Ebio|figa/kaasɛ|Farebae|AKAkan|Kratafa titriw|kasa interface( anyimu) : Akan|Kasa ma no mu-nsɛm : Arab kasa|ɔdio|Ntwatiaa / wɔabɔ no tɔfa wɔ mu no te ase ma Umrah|play|pause|stop|mute|unmute|max volume|Kasakyerɛ ni :|Farebae:|17 / 11 / 1432 , 15/10/2011|Nhyehyɛmu:|Jurisprudence/ Esum Nimdea|Som|Hajj na Umrah|Jurisprudence/ Esum Nimdea|Som|Hajj na Umrah|Mmira ma Hajj na Umrah|nkyerɛmu|kasamu /sɛntɛns ma te ase na Umrah wɔ ... mu no hann ma no Quran na Sunnah na te ase ma no nana na no kasamu /sɛntɛns ma bi ma no emerging yi adu obusuani|Akenkane we ye di ko kasa bi su (36)|Afar - Qafár afa|Akan|Amhari ne - አማርኛ|Arab kasa - عربي|Assamese - অসমীয়া|Bengali - বাংলা|Maldive - ދިވެހި|Greek - Ελληνικά|English ( brofo kasa) - English|Persian - فارسی|Fula - pulla|French - Français|Hausa - Hausa|Kurdish - كوردی سۆرانی|Uganda ne - Oluganda|Mandinka - Mandinko|Malayalam - മലയാളം|Nepali - नेपाली|Portuguese - Português|Russian - Русский|Sango - Sango|Sinhalese - සිංහල|Somali - Soomaali|Albania ne - Shqip|Swahili - Kiswahili|Telugu - తెలుగు ప్రజలు|Tajik - Тоҷикӣ|Thai - ไทย|Tagalog - Tagalog|Turkish - Türkçe|Uyghur - ئۇيغۇرچە|Urdu - اردو|Uzbeck ne - Ўзбек тили|Vietnamese - Việt Nam|Wolof - Wolof|Chine ne - 中文|Soma kɔ bi kyerɛ adwen kɔ wɛb ebusuapanin|Soma kɔ ne kɔ hom adamfo|Soma kɔ bi kyerɛ adwen kɔ wɛb ebusuapanin|Nsɔwso fael (1)|1|الموجز في فقه العمرة|MP3 14.7 MB|Enoumah ebatahu|Rituals/Esom ajomadie ewu Hajji mmire .. 
1434 AH [01] no fapemso Enum|Fiidbak/ Ye hiya wu jun kyiri|Lenke de yɛe|kɔntakt yɛn|Aɛn webside|Qura'an Kro kronkrom|Balagh|wɔ mfinimfin Dowload faele|Yɛ atuu bra Islam mu afei|Tsin de yɛe ewu|Anaa bomu/combine hɛn melin liste|© Islamhouse Website/ Islam dan webi site|×|×|Yi mu kasa|", 'en_XX': 'SUMMARY in the jurisprudence of Umrah - Arabic - Abdul Aziz Bin Marzooq Al-Turaifi|Islamhouse.com|Follow us:|facebook|twitter|QuranEnc.com|HadeethEnc.com|Type|Titles All|Home Page|All Languages|Categories|Know about Islam|All items|All items (4057)|Books (701)|Articles (548)|Fatawa (370)|Videos (1853)|Audios (416)|Posters (98)|Greeting cards (22)|Favorites (25)|Applications (21)|Desktop Applications (3)|To convert to Islam now !|More|Figures|Sources|Curriculums|Our Services|QuranEnc.com|HadeethEnc.com|ENEnglish|Main Page|Interface Language : English|Language of the content : Arabic|Audios|تعريب عنوان المادة|SUMMARY in the jurisprudence of Umrah|play|pause|stop|mute|unmute|max volume|Lecturer : Abdul Aziz Bin Marzooq Al-Turaifi|Sources:|AlRaya Islamic Recoding in Riyadh|17 / 11 / 1432 , 15/10/2011|Categories:|Islamic Fiqh|Fiqh of Worship|Hajj and Umrah|Islamic Fiqh|Fiqh of Worship|Hajj and Umrah|Pilgrimage and Umrah|Description|SUMMARY in jurisprudence of Umrah: A statement of jurisprudence and Umrah in the light of the Quran and Sunnah and understanding of the Ancestors and the statement of some of the emerging issues related to them.|This page translated into (36)|Afar - Qafár afa|Akane - Akan|Amharic - አማርኛ|Arabic - عربي|Assamese - অসমীয়া|Bengali - বাংলা|Maldivi - ދިވެހި|Greek - Ελληνικά|English|Persian - فارسی|Fula - pulla|French - Français|Hausa - Hausa|kurdish - كوردی سۆرانی|Ugandan - Oluganda|Mandinka - Mandinko|Malayalam - മലയാളം|Nepali - नेपाली|Portuguese - Português|Russian - Русский|Sango - Yanga ti Sango|Sinhalese - සිංහල|Somali - Soomaali|Albanian - Shqip|Swahili - Kiswahili|Telugu - తెలుగు|Tajik - Тоҷикӣ|Thai - ไทย|Tagalog - Tagalog|Turkish - Türkçe|Uyghur - ئۇيغۇرچە|Urdu - اردو|Uzbek - Ўзбек тили|Vietnamese - Việt Nam|Wolof - Wolof|Chinese - 中文|Send a comment to Webmaster|Send to a friend?|Send a comment to Webmaster|Attachments (1)|1|الموجز في فقه العمرة|MP3 14.7 MB|The relevant Material|The rituals of the pilgrimage season .. 1434 AH [ 01] the fifth pillar|The Quality of the Accepted Hajj (Piligrimage) and Its Limitations|Easy Path to the Rules of the Rites of Hajj|A Call to the Pilgrims of the Scared House of Allah|More|feedback|Important links|Contact us|Privacy policy|Islam Q&A|Learning Arabic Language|About Us|Convert To Islam|Noble Quran encyclopedia|IslamHouse.com Reader|Encyclopedia of Translated Prophetic Hadiths|Our Services|The Quran|Balagh|Center for downloading files|To embrace Islam now...|Follow us through|Or join our mailing list.|© Islamhouse Website|×|×|Choose language|'}}
```
An instance of `sentences` type for language `ak_GH`:
```
{'LASER_similarity': 1.4549942016601562, 'translation': {'ak_GH': 'Salah (nyamefere) ye Mmerebeia', 'en_XX': 'What he dislikes when fasting (10)'}}
```
### Data Fields
For `documents` type:
- `Domain`: a `string` feature containing the domain.
- `Source_URL`: a `string` feature containing the source URL.
- `Target_URL`: a `string` feature containing the target URL.
- `translation`: a `dictionary` feature with two keys :
- `en_XX`: a `string` feature containing the content in English.
- <language_code>: a `string` feature containing the content in the `language_code` specified.
For `sentences` type:
- `LASER_similarity`: a `float32` feature representing the LASER similarity score.
- `translation`: a `dictionary` feature with two keys :
- `en_XX`: a `string` feature containing the content in English.
- <language_code>: a `string` feature containing the content in the `language_code` specified.
### Data Splits
Split sizes of some small configurations:
| name |train|
|----------|----:|
|documents-zz_TR|41|
|sentences-zz_TR|34|
|documents-tz_MA|4|
|sentences-tz_MA|33|
|documents-ak_GH|249|
|sentences-ak_GH|478|
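Putting the fields and splits above together, a short sketch of iterating over the sentence pairs of one of the small configurations (the loading call follows the examples earlier in this card; the choice of `ak_GH` and the similarity threshold are purely illustrative):
```
from datasets import load_dataset

# Load the sentence pairs for the ak_GH configuration (only a "train" split exists).
pairs = load_dataset("ccaligned_multilingual", language_code="ak_GH", type="sentences")

for example in pairs["train"]:
    en = example["translation"]["en_XX"]
    ak = example["translation"]["ak_GH"]
    score = example["LASER_similarity"]
    if score > 1.0:  # illustrative quality filter on the LASER score
        print(f"{en} ||| {ak} ({score:.2f})")
```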
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{elkishky_ccaligned_2020,
author = {El-Kishky, Ahmed and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Koehn, Philipp},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)},
month = {November},
title = {{CCAligned}: A Massive Collection of Cross-lingual Web-Document Pairs},
year = {2020},
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.480",
doi = "10.18653/v1/2020.emnlp-main.480",
pages = "5960--5969"
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
conceptnet5 | 2023-06-01T14:59:50.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",
"sou... | null | This dataset is designed to provide training data
for common sense relationships pulled together from various sources.
The dataset is multi-lingual. See language codes and language info
here: https://github.com/commonsense/conceptnet5/wiki/Languages
This dataset provides an interface for the conceptnet5 csv file, and
some (but not all) of the raw text data used to build conceptnet5:
omcsnet_sentences_free.txt, and omcsnet_sentences_more.txt.
One use of this dataset would be to learn to extract the conceptnet
relationship from the omcsnet sentences.
Conceptnet5 has 34,074,917 relationships. Of those relationships,
there are 2,176,099 surface text sentences related to those 2M
entries.
omcsnet_sentences_free has 898,161 lines. omcsnet_sentences_more has
2,001,736 lines.
Original downloads are available here
https://github.com/commonsense/conceptnet5/wiki/Downloads. For more
information, see: https://github.com/commonsense/conceptnet5/wiki
The omcsnet data comes with the following warning from the authors of
the above site: Remember: this data comes from various forms of
crowdsourcing. Sentences in these files are not necessarily true,
useful, or appropriate. | \
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge." In proceedings of AAAI 31.
} | null | 13 | 10 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- de
- en
- es
- fr
- it
- ja
- nl
- pt
- ru
- zh
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: conceptnet
pretty_name: Conceptnet5
dataset_info:
- config_name: conceptnet5
features:
- name: sentence
dtype: string
- name: full_rel
dtype: string
- name: rel
dtype: string
- name: arg1
dtype: string
- name: arg2
dtype: string
- name: lang
dtype: string
- name: extra_info
dtype: string
- name: weight
dtype: float32
splits:
- name: train
num_bytes: 11493868180
num_examples: 34074917
download_size: 497963447
dataset_size: 11493868180
- config_name: omcs_sentences_free
features:
- name: sentence
dtype: string
- name: raw_data
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 174811310
num_examples: 898160
download_size: 104247648
dataset_size: 174811310
- config_name: omcs_sentences_more
features:
- name: sentence
dtype: string
- name: raw_data
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 341424279
num_examples: 2001735
download_size: 209776958
dataset_size: 341424279
config_names:
- conceptnet5
- omcs_sentences_free
- omcs_sentences_more
---
# Dataset Card for Conceptnet5
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://github.com/commonsense/conceptnet5/wiki
- **Repository:**
https://github.com/commonsense/conceptnet5/wiki
- **Paper:**
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge." In proceedings of AAAI 31.
### Dataset Summary
ConceptNet is a multilingual knowledge base, representing words and
phrases that people use and the common-sense relationships between
them. The knowledge in ConceptNet is collected from a variety of
resources, including crowd-sourced resources (such as Wiktionary and
Open Mind Common Sense), games with a purpose (such as Verbosity and
nadya.jp), and expert-created resources (such as WordNet and JMDict).
You can browse what ConceptNet knows at http://conceptnet.io.
This dataset is designed to provide training data
for common sense relationships pulled together from various sources.
The dataset is multi-lingual. See language codes and language info
here: https://github.com/commonsense/conceptnet5/wiki/Languages
This dataset provides an interface for the conceptnet5 csv file, and
some (but not all) of the raw text data used to build conceptnet5:
omcsnet_sentences_free.txt, and omcsnet_sentences_more.txt.
One use of this dataset would be to learn to extract the conceptnet
relationship from the omcsnet sentences.
Conceptnet5 has 34,074,917 relationships. Of those relationships,
there are 2,176,099 surface text sentences related to those 2M
entries.
omcsnet_sentences_free has 898,161 lines. omcsnet_sentences_more has
2,001,736 lines.
Original downloads are available here
https://github.com/commonsense/conceptnet5/wiki/Downloads. For more
information, see: https://github.com/commonsense/conceptnet5/wiki
The omcsnet data comes with the following warning from the authors of
the above site:
Remember: this data comes from various forms of
crowdsourcing. Sentences in these files are not necessarily true,
useful, or appropriate.
### Languages
en, fr, it, de, es, ru, pt, ja, nl, zh and others
## Dataset Structure
### Data Instances
There are three configurations for the dataset: conceptnet5, omcs_sentences_free, omcs_sentences_more.
Conceptnet5 defines:
```
{
'sentence': ...,
'full_rel': ...,
'rel': ...,
'arg1': ...,
'arg2': ...,
'lang': ...,
'extra_info': ...,
'weight': ...
}
```
The omcs text defines:
```
{
'sentence': ...,
'raw_data': ...,
'lang': ...
}
```
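A short sketch of loading one of these configurations with the `datasets` library (configuration, split, and field names are those listed in this card; the smaller OMCS configuration is used here because the conceptnet5 configuration has ~34M rows):
```
from datasets import load_dataset

# Load the free-form OMCS sentences; only a "train" split exists.
omcs = load_dataset("conceptnet5", "omcs_sentences_free", split="train")

first = omcs[0]
print(first["sentence"])
print(first["lang"])
```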
### Data Fields
For conceptnet5 configurations:
* full_rel: the full relationship. e.g., /a/[/r/Antonym/,/c/en/able/,/c/en/cane/]
* rel: the binary relationship. e.g., /r/Antonym
* arg1: the first argument to the binary relationship. e.g., /c/en/able
* arg2: the second argument to the binary relationship. e.g., /c/en/cane
* lang: the language code, e.g., en, fr, etc. If arg1 and arg2 are in two different languages, then the form is lang1/lang2.
* extra_info: a string that includes json data that has the dataset name, license type (mostly cc-4.0), contributor, etc. e.g., : {"dataset": "/d/verbosity", "license": "cc:by/4.0", "sources": [{"contributor": "/s/resource/verbosity"}], "surfaceEnd": "cane", "surfaceStart": "able", "surfaceText": "[[able]] is the opposite of [[cane]]", "weight": 0.299}
* sentence: the sentence from which the relationship was extracted, if one exists, with brackets around the arg1 and arg2. e.g., [[able]] is the opposite of [[cane]]
* weight: the weight assigned to the relationship, by curators or automatically, between 0.0 and 1.0, with higher values indicating more certainty.
For the omcs text configurations:
* sentence: the raw sentence
* raw_data: the raw tab-separated data of the form: id, text, curator_id, created_on, language_id, activity_id, and score. Most of this information was tied to older systems for entering the data, so it was not parsed into fields for the dataset. e.g., 1237278 someone can be at catch 10805 2006-11-14 17:56:49.70872-05 en 27 1
* lang: the language code
### Data Splits
There are no splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created over many years for research in common sense reasoning.
### Source Data
#### Initial Data Collection and Normalization
Started as the Open Mind Common Sense project at MIT Media Lab in 1999. See https://en.wikipedia.org/wiki/Open_Mind_Common_Sense
#### Who are the source language producers?
Crowd Sourced
### Annotations
#### Annotation process
Crowd Source template text, games, etc.
#### Who are the annotators?
Crowd sourced.
### Personal and Sensitive Information
Unknown, but likely there are names of famous individuals.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to help machines understand common sense.
### Discussion of Biases
See the website and paper for efforts to minimize data bias, but
please note that omcs_sentences_free and omcs_sentences_more are raw data
entered by users and may well contain biased content.
### Other Known Limitations
While the relationship dataset is large, the amount of actual sentences is limited.
## Additional Information
### Dataset Curators
The authors of https://github.com/commonsense/conceptnet5/wiki and Luminoso.
### Licensing Information
This work includes data from ConceptNet 5, which was compiled by the
Commonsense Computing Initiative. ConceptNet 5 is freely available under
the Creative Commons Attribution-ShareAlike license (CC BY SA 3.0) from
http://conceptnet.io.
The included data was created by contributors to Commonsense Computing
projects, contributors to Wikimedia projects, DBPedia, OpenCyc, Games
with a Purpose, Princeton University's WordNet, Francis Bond's Open
Multilingual WordNet, and Jim Breen's JMDict.
Credits and acknowledgements
ConceptNet has been developed by:
The MIT Media Lab, through various groups at different times:
Commonsense Computing
Software Agents
Digital Intuition
The Commonsense Computing Initiative, a worldwide collaboration with contributions from:
National Taiwan University
Universidade Federal de São Carlos
Hokkaido University
Tilburg University
Nihon Unisys Labs
Dentsu Inc.
Kyoto University
Yahoo Research Japan
Luminoso Technologies, Inc.
Significant amounts of data were imported from:
WordNet, a project of Princeton University
Open Multilingual WordNet, compiled by Francis Bond and Kyonghee Paik
Wikipedia and Wiktionary, collaborative projects of the Wikimedia Foundation
Luis von Ahn's "Games with a Purpose"
JMDict, compiled by Jim Breen
CC-CEDict, by MDBG
The Unicode CLDR
DBPedia
Here is a short, incomplete list of people who have made significant contributions to the development of ConceptNet as a data resource, roughly in order of appearance:
Push Singh
Catherine Havasi
Hugo Liu
Hyemin Chung
Robyn Speer
Ken Arnold
Yen-Ling Kuo
Joshua Chin
Joanna Lowry-Duda
Robert Beaudoin
Naoki Otani
Vanya Cohen
Licenses for included resources
Commonsense Computing
The Commonsense Computing project originated at the MIT Media Lab and expanded worldwide. Tens of thousands of contributors have taken some time to teach facts to computers. Their pseudonyms can be found in the "sources" list found in ConceptNet's raw data and in its API.
Games with a Purpose
Data collected from Verbosity, one of the CMU "Games with a Purpose", is used and released under ConceptNet's license, by permission from Luis von Ahn and Harshit Surana.
Verbosity players are anonymous, so in the "sources" list, data from Verbosity is simply credited to the pseudonym "verbosity".
Wikimedia projects
ConceptNet uses data directly from Wiktionary, the free dictionary. It also uses data from Wikipedia, the free encyclopedia via DBPedia.
Wiktionary and Wikipedia are collaborative projects, authored by their respective online communities. They are currently released under the Creative Commons Attribution-ShareAlike license.
Wikimedia encourages giving attribution by providing links to the hosted pages that the data came from, and DBPedia asks for the same thing in turn. In addition to crediting the assertions that came from Wiktionary and DBPedia, we also provide "ExternalURL" edges pointing to the page that they came from. For example, the term /c/de/sprache has an ExternalURL link pointing to http://en.wiktionary.org/wiki/Sprache. Its list of individual contributors can be seen by following its "History" link.
The URLs of links to DBPedia are the same as the resource names that DBPedia uses, encouraging interoperability with their linked data.
WordNet
WordNet is available under an unencumbered license: see http://wordnet.princeton.edu/wordnet/license/. Its text is reproduced below:
WordNet Release 3.0
This software and database is being provided to you, the LICENSEE, by Princeton University under the following license. By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions.:
Permission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution.
WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
The name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database. Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same.
Open Multilingual WordNet
Open Multilingual WordNet was compiled by Francis Bond, Kyonghee Paik, and Ryan Foster, from data provided by many multilingual WordNet projects. Here is the complete list of references to the projects that created the data.
### Citation Information
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge." In proceedings of AAAI 31.
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
dialog_re | 2022-11-18T19:58:15.000Z | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en"... | null | DialogRE is the first human-annotated dialogue based relation extraction (RE) dataset aiming
to support the prediction of relation(s) between two arguments that appear in a dialogue.
The dataset annotates all occurrences of 36 possible relation types that exist between pairs
of arguments in the 1,788 dialogues originating from the complete transcripts of Friends. | @inproceedings{yu2020dialogue,
title={Dialogue-Based Relation Extraction},
author={Yu, Dian and Sun, Kai and Cardie, Claire and Yu, Dong},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020},
url={https://arxiv.org/abs/2004.08056v1}
} | null | 7 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: dialogre
pretty_name: DialogRE
tags:
- relation-extraction
dataset_info:
features:
- name: dialog
sequence: string
- name: relation_data
sequence:
- name: x
dtype: string
- name: y
dtype: string
- name: x_type
dtype: string
- name: y_type
dtype: string
- name: r
sequence: string
- name: rid
sequence: int32
- name: t
sequence: string
config_name: dialog_re
splits:
- name: train
num_bytes: 1520940
num_examples: 1073
- name: test
num_bytes: 472306
num_examples: 357
- name: validation
num_bytes: 490580
num_examples: 358
download_size: 3816234
dataset_size: 2483826
---
# Dataset Card for [DialogRE]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DialogRE Homepage](https://dataset.org/dialogre/)
- **Repository:** [DialogRE Repository](https://github.com/nlpdata/dialogre)
- **Paper:** [Arxiv](https://arxiv.org/abs/2004.08056v1)
- **Point of Contact:** [dialogre@dataset.org](mailto:dialogre@dataset.org)
### Dataset Summary
The DialogRE dataset is the first human-annotated dialogue-based relation extraction (RE) dataset, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. DialogRE can also act as a platform for studying cross-sentence RE as most facts span multiple sentences. Specifically, the dataset annotates all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends (in English).
### Supported Tasks and Leaderboards
* `other-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists of predicting the relation(s) between two arguments that appear in a dialogue. Success on this task is typically measured by achieving a *high* [F1 Score](https://huggingface.co/metrics/f1).
### Languages
The dialogues in the dataset are in English, originating from the transcripts of Friends. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point consists of a dialogue between speakers as a list of sentences. This is followed by the annotations of the relations between the entities in the dialog.
An example from the DialogRE train set looks as follows:
```
{'dialog': ["Speaker 1: It's been an hour and not one of my classmates has shown up! I tell you, when I actually die some people are gonna get seriously haunted!",
'Speaker 2: There you go! Someone came!',
"Speaker 1: Ok, ok! I'm gonna go hide! Oh, this is so exciting, my first mourner!",
'Speaker 3: Hi, glad you could come.',
'Speaker 2: Please, come in.',
"Speaker 4: Hi, you're Chandler Bing, right? I'm Tom Gordon, I was in your class.",
'Speaker 2: Oh yes, yes... let me... take your coat.',
"Speaker 4: Thanks... uh... I'm so sorry about Ross, it's...",
'Speaker 2: At least he died doing what he loved... watching blimps.',
'Speaker 1: Who is he?',
'Speaker 2: Some guy, Tom Gordon.',
"Speaker 1: I don't remember him, but then again I touched so many lives.",
'Speaker 3: So, did you know Ross well?',
"Speaker 4: Oh, actually I barely knew him. Yeah, I came because I heard Chandler's news. D'you know if he's seeing anyone?",
'Speaker 3: Yes, he is. Me.',
'Speaker 4: What? You... You... Oh! Can I ask you a personal question? Ho-how do you shave your beard so close?',
"Speaker 2: Ok Tommy, that's enough mourning for you! Here we go, bye bye!!",
'Speaker 4: Hey, listen. Call me.',
'Speaker 2: Ok!'],
'relation_data': {'r': [['per:alternate_names'],
['per:alumni'],
['per:alternate_names'],
['per:alumni', 'per:positive_impression'],
['per:alternate_names'],
['unanswerable']],
'rid': [[30], [4], [30], [4, 1], [30], [37]],
't': [[''], [''], [''], ['', 'call me'], [''], ['']],
'x': ['Speaker 2',
'Speaker 2',
'Speaker 4',
'Speaker 4',
'Speaker 4',
'Speaker 1'],
'x_type': ['PER', 'PER', 'PER', 'PER', 'PER', 'PER'],
'y': ['Chandler Bing',
'Speaker 4',
'Tom Gordon',
'Speaker 2',
'Tommy',
'Tommy'],
'y_type': ['PER', 'PER', 'PER', 'PER', 'PER', 'PER']}}
```
### Data Fields
* `dialog`
* List of dialogue turns spoken between the speakers
* List of relation annotations for the dialog, one entry per argument pair
* `x` : First entity
* `y` : Second entity
* `x_type` : Type of the first entity
* `y_type`: Type of the second entity
* `r` : List of relations
* `rid`: List of relation IDs
* `t`: List of relation Trigger words
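A minimal sketch of reading these fields for a single dialogue (the annotation lists are parallel, so index *i* of `x`, `y` and `r` refers to the same argument pair):
```
from datasets import load_dataset

dialog_re = load_dataset("dialog_re", split="train")

example = dialog_re[0]
rels = example["relation_data"]
for x, y, r in zip(rels["x"], rels["y"], rels["r"]):
    print(f"{x} --[{', '.join(r)}]--> {y}")
```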
### Data Splits
The data is split into a training, validation and test set as per the original dataset split.
| | train | validation | test |
| --------------------- |-------:|------------:|------:|
| Input dialog examples | 1073 | 358 | 357 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
DialogRE dataset is intended for non-commercial research purpose only
### Citation Information
```
@inproceedings{yu2020dialogue,
title={Dialogue-Based Relation Extraction},
author={Yu, Dian and Sun, Kai and Cardie, Claire and Yu, Dong},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020},
url={https://arxiv.org/abs/2004.08056v1}
}
```
### Contributions
Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset. |
hausa_voa_topics | 2023-01-25T14:31:55.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ha",
"license:unknown",
"region:us"
] | null | A collection of news article headlines in Hausa from VOA Hausa.
Each headline is labeled with one of the following classes: Nigeria,
Africa, World, Health or Politics.
The dataset was presented in the paper:
Hedderich, Adelani, Zhu, Alabi, Markus, Klakow: Transfer Learning and
Distant Supervision for Multilingual Transformer Models: A Study on
African Languages (EMNLP 2020). | @inproceedings{hedderich-etal-2020-transfer,
title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages",
author = "Hedderich, Michael A. and
Adelani, David and
Zhu, Dawei and
Alabi, Jesujoba and
Markus, Udia and
Klakow, Dietrich",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.204",
doi = "10.18653/v1/2020.emnlp-main.204",
} | null | 0 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ha
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: Hausa Voa News Topic Classification Dataset (HausaVoaTopics)
dataset_info:
features:
- name: news_title
dtype: string
- name: label
dtype:
class_label:
names:
'0': Africa
'1': Health
'2': Nigeria
'3': Politics
'4': World
splits:
- name: train
num_bytes: 144932
num_examples: 2045
- name: validation
num_bytes: 20565
num_examples: 290
- name: test
num_bytes: 41195
num_examples: 582
download_size: 195824
dataset_size: 206692
---
# Dataset Card for Hausa VOA News Topic Classification dataset (hausa_voa_topics)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** -
- **Repository:** https://github.com/uds-lsv/transfer-distant-transformer-african
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.204/
- **Leaderboard:** -
- **Point of Contact:** Michael A. Hedderich and David Adelani
{mhedderich, didelani} (at) lsv.uni-saarland.de
### Dataset Summary
A news headline topic classification dataset, similar to AG-news, for Hausa. The news headlines were collected from [VOA Hausa](https://www.voahausa.com/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Hausa (ISO 639-1: ha)
## Dataset Structure
### Data Instances
An instance consists of a news title sentence and the corresponding topic label.
### Data Fields
- `news_title`: A news title
- `label`: The label describing the topic of the news title. Can be one of the following classes: Nigeria, Africa, World, Health or Politics.
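A minimal sketch of reading the headlines together with their topic names (the class names come from the label definition in the YAML header above):
```
from datasets import load_dataset

ds = load_dataset("hausa_voa_topics", split="train")

label_names = ds.features["label"].names  # ['Africa', 'Health', 'Nigeria', 'Politics', 'World']
example = ds[0]
print(example["news_title"], "->", label_names[example["label"]])
```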
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset. |
hippocorpus | 2022-11-03T16:15:25.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"narrative-flow",
"region:us"
] | null | To examine the cognitive processes of remembering and imagining and their traces in language, we introduce Hippocorpus, a dataset of 6,854 English diary-like short stories about recalled and imagined events. Using a crowdsourcing framework, we first collect recalled stories and summaries from workers, then provide these summaries to other workers who write imagined stories. Finally, months later, we collect a retold version of the recalled stories from a subset of recalled authors. Our dataset comes paired with author demographics (age, gender, race), their openness to experience, as well as some variables regarding the author's relationship to the event (e.g., how personal the event is, how often they tell its story, etc.). | @inproceedings{sap-etal-2020-recollection,
title = "Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models",
author = "Sap, Maarten and
Horvitz, Eric and
Choi, Yejin and
Smith, Noah A. and
Pennebaker, James",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.178",
doi = "10.18653/v1/2020.acl-main.178",
pages = "1970--1978",
abstract = "We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release Hippocorpus, a dataset of 7,000 stories about imagined and recalled events. We introduce a measure of narrative flow and use this to examine the narratives for imagined and recalled events. Additionally, we measure the differential recruitment of knowledge attributed to semantic memory versus episodic memory (Tulving, 1972) for imagined and recalled storytelling by comparing the frequency of descriptions of general commonsense events with more specific realis events. Our analyses show that imagined stories have a substantially more linear narrative flow, compared to recalled stories in which adjacent sentences are more disconnected. In addition, while recalled stories rely more on autobiographical events based on episodic memory, imagined stories express more commonsense knowledge based on semantic memory. Finally, our measures reveal the effect of narrativization of memories in stories (e.g., stories about frequently recalled memories flow more linearly; Bartlett, 1932). Our findings highlight the potential of using NLP tools to study the traces of human cognition in language.",
} | null | 3 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
paperswithcode_id: null
pretty_name: hippocorpus
tags:
- narrative-flow
dataset_info:
features:
- name: AssignmentId
dtype: string
- name: WorkTimeInSeconds
dtype: string
- name: WorkerId
dtype: string
- name: annotatorAge
dtype: float32
- name: annotatorGender
dtype: string
- name: annotatorRace
dtype: string
- name: distracted
dtype: float32
- name: draining
dtype: float32
- name: frequency
dtype: float32
- name: importance
dtype: float32
- name: logTimeSinceEvent
dtype: string
- name: mainEvent
dtype: string
- name: memType
dtype: string
- name: mostSurprising
dtype: string
- name: openness
dtype: string
- name: recAgnPairId
dtype: string
- name: recImgPairId
dtype: string
- name: similarity
dtype: string
- name: similarityReason
dtype: string
- name: story
dtype: string
- name: stressful
dtype: string
- name: summary
dtype: string
- name: timeSinceEvent
dtype: string
splits:
- name: train
num_bytes: 7229795
num_examples: 6854
download_size: 0
dataset_size: 7229795
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Hippocorpus](https://msropendata.com/datasets/0a83fb6f-a759-4a17-aaa2-fbac84577318)
- **Repository:** [Hippocorpus](https://msropendata.com/datasets/0a83fb6f-a759-4a17-aaa2-fbac84577318)
- **Paper:** [Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models](http://erichorvitz.com/cognitive_studies_narrative.pdf)
- **Point of Contact:** [Eric Horvitz](mailto:horvitz@microsoft.com)
### Dataset Summary
To examine the cognitive processes of remembering and imagining and their traces in language, we introduce Hippocorpus, a dataset of 6,854 English diary-like short stories about recalled and imagined events. Using a crowdsourcing framework, we first collect recalled stories and summaries from workers, then provide these summaries to other workers who write imagined stories. Finally, months later, we collect a retold version of the recalled stories from a subset of recalled authors. Our dataset comes paired with author demographics (age, gender, race), their openness to experience, as well as some variables regarding the author's relationship to the event (e.g., how personal the event is, how often they tell its story, etc.).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
[More Information Needed]
### Data Instances
[More Information Needed]
### Data Fields
This CSV file contains all the stories in Hippocorpus v2 (6,854 stories)
These are the columns in the file:
- `AssignmentId`: Unique ID of this story
- `WorkTimeInSeconds`: Time in seconds that it took the worker to do the entire HIT (reading instructions, storywriting, questions)
- `WorkerId`: Unique ID of the worker (random string, not MTurk worker ID)
- `annotatorAge`: Lower limit of the age bucket of the worker. Buckets are: 18-24, 25-29, 30-34, 35-39, 40-44, 45-49, 50-54, 55+
- `annotatorGender`: Gender of the worker
- `annotatorRace`: Race/ethnicity of the worker
- `distracted`: How distracted were you while writing your story? (5-point Likert)
- `draining`: How taxing/draining was writing for you emotionally? (5-point Likert)
- `frequency`: How often do you think about or talk about this event? (5-point Likert)
- `importance`: How impactful, important, or personal is this story/this event to you? (5-point Likert)
- `logTimeSinceEvent`: Log of time (days) since the recalled event happened
- `mainEvent`: Short phrase describing the main event described
- `memType`: Type of story (recalled, imagined, retold)
- `mostSurprising`: Short phrase describing what the most surprising aspect of the story was
- `openness`: Continuous variable representing the openness to experience of the worker
- `recAgnPairId`: ID of the recalled story that corresponds to this retold story (null for imagined stories). Group on this variable to get the recalled-retold pairs.
- `recImgPairId`: ID of the recalled story that corresponds to this imagined story (null for retold stories). Group on this variable to get the recalled-imagined pairs.
- `similarity`: How similar to your life does this event/story feel to you? (5-point Likert)
- `similarityReason`: Free text annotation of similarity
- `story`: Story about the imagined or recalled event (15-25 sentences)
- `stressful`: How stressful was this writing task? (5-point Likert)
- `summary`: Summary of the events in the story (1-3 sentences)
- `timeSinceEvent`: Time (num. days) since the recalled event happened
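A minimal loading sketch using the columns above. This is not an official loader: the file name is hypothetical (the card only states the corpus ships as a single CSV), and it assumes `recAgnPairId` points at the paired recalled story's `AssignmentId`.
```python
import pandas as pd

# Hypothetical file name; the card only states that the corpus is a single CSV.
df = pd.read_csv("hippocorpus_v2.csv")

# Split the stories by type using the `memType` column.
by_type = {t: df[df["memType"] == t] for t in ("recalled", "imagined", "retold")}
print({t: len(frame) for t, frame in by_type.items()})

# Pair each retold story with the recalled story it retells via `recAgnPairId`
# (assumed here to reference the recalled story's `AssignmentId`).
recalled_stories = by_type["recalled"].set_index("AssignmentId")["story"]
retold = by_type["retold"].dropna(subset=["recAgnPairId"])
pairs = [
    (recalled_stories.get(row.recAgnPairId), row.story)
    for row in retold.itertuples()
]
```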
### Data Splits
[More Information Needed]
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
The dataset was initially created by Maarten Sap, Eric Horvitz, Yejin Choi, Noah A. Smith, James W. Pennebaker, during work done at Microsoft Research.
### Licensing Information
Hippocorpus is distributed under the [Open Use of Data Agreement v1.0](https://msropendata-web-api.azurewebsites.net/licenses/f1f352a6-243f-4905-8e00-389edbca9e83/view).
### Citation Information
```
@inproceedings{sap-etal-2020-recollection,
title = "Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models",
author = "Sap, Maarten and
Horvitz, Eric and
Choi, Yejin and
Smith, Noah A. and
Pennebaker, James",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.178",
doi = "10.18653/v1/2020.acl-main.178",
pages = "1970--1978",
abstract = "We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release Hippocorpus, a dataset of 7,000 stories about imagined and recalled events. We introduce a measure of narrative flow and use this to examine the narratives for imagined and recalled events. Additionally, we measure the differential recruitment of knowledge attributed to semantic memory versus episodic memory (Tulving, 1972) for imagined and recalled storytelling by comparing the frequency of descriptions of general commonsense events with more specific realis events. Our analyses show that imagined stories have a substantially more linear narrative flow, compared to recalled stories in which adjacent sentences are more disconnected. In addition, while recalled stories rely more on autobiographical events based on episodic memory, imagined stories express more commonsense knowledge based on semantic memory. Finally, our measures reveal the effect of narrativization of memories in stories (e.g., stories about frequently recalled memories flow more linearly; Bartlett, 1932). Our findings highlight the potential of using NLP tools to study the traces of human cognition in language.",
}
```
### Contributions
Thanks to [@manandey](https://github.com/manandey) for adding this dataset. |
id_panl_bppt | 2023-01-25T14:32:43.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:id",
"license:unknown",
"region:us"
] | null | Parallel Text Corpora for Multi-Domain Translation System created by BPPT (Indonesian Agency for the Assessment and
Application of Technology) for PAN Localization Project (A Regional Initiative to Develop Local Language Computing
Capacity in Asia). The dataset contains around 24K sentences divided into 4 different topics (Economy, International,
Science and Technology and Sport). | @inproceedings{id_panl_bppt,
author = {PAN Localization - BPPT},
title = {Parallel Text Corpora, English Indonesian},
year = {2009},
url = {http://digilib.bppt.go.id/sampul/p92-budiono.pdf},
} | null | 1 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
- id
license:
- unknown
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: IdPanlBppt
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- id
- name: topic
dtype:
class_label:
names:
'0': Economy
'1': International
'2': Science
'3': Sport
config_name: id_panl_bppt
splits:
- name: train
num_bytes: 7455924
num_examples: 24021
download_size: 2366973
dataset_size: 7455924
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PANL BPPT](http://digilib.bppt.go.id/sampul/p92-budiono.pdf)
- **Repository:** [PANL BPPT Repository](https://github.com/cahya-wirawan/indonesian-language-models/raw/master/data/BPPTIndToEngCorpusHalfM.zip)
- **Paper:** [Resource Report: Building Parallel Text Corpora for Multi-Domain Translation System](http://digilib.bppt.go.id/sampul/p92-budiono.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Parallel Text Corpora for Multi-Domain Translation System created by BPPT (Indonesian Agency for the Assessment and
Application of Technology) for the PAN Localization Project (A Regional Initiative to Develop Local Language Computing
Capacity in Asia). The dataset contains around 24K parallel sentence pairs divided into 4 different topics (Economy,
International, Science and Technology, and Sport).
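A short loading sketch, assuming the dataset is available through the Hugging Face `datasets` library under the id `id_panl_bppt`:
```python
from datasets import load_dataset

# Assumes the canonical Hub id "id_panl_bppt"; adjust the path if loading locally.
ds = load_dataset("id_panl_bppt", split="train")

example = ds[0]
print(example["translation"]["en"])                    # English side of the pair
print(example["translation"]["id"])                    # Indonesian side of the pair
print(ds.features["topic"].int2str(example["topic"]))  # e.g. "Economy"
```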
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English and Indonesian (parallel sentence pairs).
## Dataset Structure
[More Information Needed]
### Data Instances
An example of the dataset:
```
{
'id': '0',
'topic': 0,
'translation':
{
'en': 'Minister of Finance Sri Mulyani Indrawati said that a sharp correction of the composite
index by up to 4 pct in Wedenesday?s trading was a mere temporary effect of regional factors like
decline in plantation commodity prices and the financial crisis in Thailand.',
'id': 'Menteri Keuangan Sri Mulyani mengatakan koreksi tajam pada Indeks Harga Saham Gabungan
IHSG hingga sekitar 4 persen dalam perdagangan Rabu 10/1 hanya efek sesaat dari faktor-faktor regional
seperti penurunan harga komoditi perkebunan dan krisis finansial di Thailand.'
}
}
```
### Data Fields
- `id`: id of the sample
- `translation`: the parallel English-Indonesian sentence pair
- `topic`: the topic of the sentence. It could be one of the following:
- Economic
- International
- Science and Technology
- Sport
### Data Splits
The dataset consists of a single `train` split with 24,021 sentence pairs.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{id_panl_bppt,
author = {PAN Localization - BPPT},
title = {Parallel Text Corpora, English Indonesian},
year = {2009},
url = {http://digilib.bppt.go.id/sampul/p92-budiono.pdf},
}
```
### Contributions
Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset. |
kd_conv | 2023-03-28T14:17:47.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"lan... | null | KdConv is a Chinese multi-domain Knowledge-driven Conversionsation dataset, grounding the topics in multi-turn conversations to knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel), and 86K utterances with an average turn number of 19.0. These conversations contain in-depth discussions on related topics and natural transition between multiple topics, while the corpus can also used for exploration of transfer learning and domain adaptation.\ | @inproceedings{zhou-etal-2020-kdconv,
title = "{K}d{C}onv: A {C}hinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation",
author = "Zhou, Hao and
Zheng, Chujie and
Huang, Kaili and
Huang, Minlie and
Zhu, Xiaoyan",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.635",
doi = "10.18653/v1/2020.acl-main.635",
pages = "7098--7108",
} | null | 9 | 10 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
language:
- zh
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: kdconv
pretty_name: Knowledge-driven Conversation
dataset_info:
- config_name: travel_dialogues
features:
- name: messages
sequence:
- name: message
dtype: string
- name: attrs
sequence:
- name: attrname
dtype: string
- name: attrvalue
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 3241550
num_examples: 1200
- name: test
num_bytes: 793883
num_examples: 150
- name: validation
num_bytes: 617177
num_examples: 150
download_size: 11037768
dataset_size: 4652610
- config_name: travel_knowledge_base
features:
- name: head_entity
dtype: string
- name: kb_triplets
sequence:
sequence: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 1517024
num_examples: 1154
download_size: 11037768
dataset_size: 1517024
- config_name: music_dialogues
features:
- name: messages
sequence:
- name: message
dtype: string
- name: attrs
sequence:
- name: attrname
dtype: string
- name: attrvalue
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 3006192
num_examples: 1200
- name: test
num_bytes: 801012
num_examples: 150
- name: validation
num_bytes: 633905
num_examples: 150
download_size: 11037768
dataset_size: 4441109
- config_name: music_knowledge_base
features:
- name: head_entity
dtype: string
- name: kb_triplets
sequence:
sequence: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 5980643
num_examples: 4441
download_size: 11037768
dataset_size: 5980643
- config_name: film_dialogues
features:
- name: messages
sequence:
- name: message
dtype: string
- name: attrs
sequence:
- name: attrname
dtype: string
- name: attrvalue
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 4867659
num_examples: 1200
- name: test
num_bytes: 956995
num_examples: 150
- name: validation
num_bytes: 884232
num_examples: 150
download_size: 11037768
dataset_size: 6708886
- config_name: film_knowledge_base
features:
- name: head_entity
dtype: string
- name: kb_triplets
sequence:
sequence: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 10500882
num_examples: 8090
download_size: 11037768
dataset_size: 10500882
- config_name: all_dialogues
features:
- name: messages
sequence:
- name: message
dtype: string
- name: attrs
sequence:
- name: attrname
dtype: string
- name: attrvalue
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 11115313
num_examples: 3600
- name: test
num_bytes: 2551802
num_examples: 450
- name: validation
num_bytes: 2135226
num_examples: 450
download_size: 11037768
dataset_size: 15802341
- config_name: all_knowledge_base
features:
- name: head_entity
dtype: string
- name: kb_triplets
sequence:
sequence: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 17998529
num_examples: 13685
download_size: 11037768
dataset_size: 17998529
---
# Dataset Card for KdConv
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/thu-coai/KdConv)
- **Paper:** [{K}d{C}onv: A {C}hinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation](https://www.aclweb.org/anthology/2020.acl-main.635.pdf)
### Dataset Summary
KdConv is a Chinese multi-domain Knowledge-driven Conversation dataset, grounding the topics in multi-turn
conversations to knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel),
and 86K utterances with an average turn number of 19.0. These conversations contain in-depth discussions on related
topics and natural transitions between multiple topics, while the corpus can also be used for exploration of transfer
learning and domain adaptation.
### Supported Tasks and Leaderboards
This dataset can be used for multi-turn, knowledge-grounded dialogue modelling tasks.
### Languages
The dataset is in Chinese.
## Dataset Structure
### Data Instances
Each data instance is a multi-turn conversation between two people, annotated with the knowledge base triplets referenced in each turn, e.g.:
```
{
"messages": [
{
"message": "对《我喜欢上你时的内心活动》这首歌有了解吗?"
},
{
"attrs": [
{
"attrname": "Information",
"attrvalue": "《我喜欢上你时的内心活动》是由韩寒填词,陈光荣作曲,陈绮贞演唱的歌曲,作为电影《喜欢你》的主题曲于2017年4月10日首发。2018年,该曲先后提名第37届香港电影金像奖最佳原创电影歌曲奖、第7届阿比鹿音乐奖流行单曲奖。",
"name": "我喜欢上你时的内心活动"
}
],
"message": "有些了解,是电影《喜欢你》的主题曲。"
},
...
{
"attrs": [
{
"attrname": "代表作品",
"attrvalue": "旅行的意义",
"name": "陈绮贞"
},
{
"attrname": "代表作品",
"attrvalue": "时间的歌",
"name": "陈绮贞"
}
],
"message": "我还知道《旅行的意义》与《时间的歌》,都算是她的代表作。"
},
{
"message": "好,有时间我找出来听听。"
}
],
"name": "我喜欢上你时的内心活动"
}
```
The corresponding entry in the knowledge base is a dictionary mapping each head entity to a list of knowledge base triplets (head entity, relationship, tail entity), e.g.:
```
"忽然之间": [
[
"忽然之间",
"Information",
"《忽然之间》是歌手 莫文蔚演唱的歌曲,由 周耀辉, 李卓雄填词, 林健华谱曲,收录在莫文蔚1999年发行专辑《 就是莫文蔚》里。"
],
[
"忽然之间",
"谱曲",
"林健华"
]
...
]
```
### Data Fields
Conversation data fields:
- `name`: the starting topic (entity) of the conversation
- `domain`: the domain this sample belongs to. Categorical value among `{travel, film, music}`
- `messages`: list of all the turns in the dialogue. For each turn:
- `message`: the utterance
- `attrs`: list of knowledge graph triplets referred by the utterance. For each triplet:
- `name`: the head entity
- `attrname`: the relation
- `attrvalue`: the tail entity
Knowledge Base data fields:
- `head_entity`: the head entity
- `kb_triplets`: list of corresponding triplets
- `domain`: the domain this sample belongs to. Categorical value among `{travel, film, music}`
### Data Splits
The conversation dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | train | validation | test |
|--------|------:|-----------:|-----:|
| travel | 1200 | 150 | 150 |
| film | 1200 | 150 | 150 |
| music | 1200 | 150 | 150 |
| all | 3600 | 450 | 450 |
The knowledge base dataset has only a train split, with the following sizes:
| | train |
|--------|------:|
| travel | 1154 |
| film | 8090 |
| music | 4441 |
| all | 13685 |
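As a sketch of how the two kinds of configurations can be used together (assuming the dataset is loadable through the Hugging Face `datasets` library under the id `kd_conv`, with the config names listed in the metadata above):
```python
from datasets import load_dataset

# Dialogues and the matching knowledge base for one domain (music).
dialogues = load_dataset("kd_conv", "music_dialogues", split="train")
kb = load_dataset("kd_conv", "music_knowledge_base", split="train")

# Index the knowledge triplets by head entity for quick lookup while reading a conversation.
kb_index = {row["head_entity"]: row["kb_triplets"] for row in kb}

conv = dialogues[0]
print(conv["name"])                 # starting topic of the conversation
print(conv["messages"]["message"])  # the utterances in order (nesting follows the feature definition above)
print(kb_index.get(conv["name"]))   # triplets available for the starting topic, if any
```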
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Apache License 2.0
### Citation Information
```
@inproceedings{zhou-etal-2020-kdconv,
title = "{K}d{C}onv: A {C}hinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation",
author = "Zhou, Hao and
Zheng, Chujie and
Huang, Kaili and
Huang, Minlie and
Zhu, Xiaoyan",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.635",
doi = "10.18653/v1/2020.acl-main.635",
pages = "7098--7108",
}
```
### Contributions
Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset. |
ronec | 2023-01-25T14:43:21.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ro",
"license:mit"... | null | RONEC - the Romanian Named Entity Corpus, at version 2.0, holds 12330 sentences with over 0.5M tokens, annotated with 15 classes, to a total of 80.283 distinctly annotated entities. It is used for named entity recognition and represents the largest Romanian NER corpus to date. | @article{dumitrescu2019introducing,
title={Introducing RONEC--the Romanian Named Entity Corpus},
author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},
journal={arXiv preprint arXiv:1909.01247},
year={2019}
} | null | 0 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- found
language:
- ro
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: ronec
pretty_name: RONEC
dataset_info:
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_ids
sequence: int32
- name: space_after
sequence: bool
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-ORG
'4': I-ORG
'5': B-GPE
'6': I-GPE
'7': B-LOC
'8': I-LOC
'9': B-NAT_REL_POL
'10': I-NAT_REL_POL
'11': B-EVENT
'12': I-EVENT
'13': B-LANGUAGE
'14': I-LANGUAGE
'15': B-WORK_OF_ART
'16': I-WORK_OF_ART
'17': B-DATETIME
'18': I-DATETIME
'19': B-PERIOD
'20': I-PERIOD
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-NUMERIC
'26': I-NUMERIC
'27': B-ORDINAL
'28': I-ORDINAL
'29': B-FACILITY
'30': I-FACILITY
config_name: ronec
splits:
- name: train
num_bytes: 8701577
num_examples: 9000
- name: validation
num_bytes: 1266490
num_examples: 1330
- name: test
num_bytes: 1902224
num_examples: 2000
download_size: 14675943
dataset_size: 11870291
---
# Dataset Card for RONEC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/dumitrescustefan/ronec
- **Repository:** https://github.com/dumitrescustefan/ronec
- **Paper:** https://arxiv.org/abs/1909.01247
- **Leaderboard:** https://lirobenchmark.github.io/
- **Point of Contact:** [Stefan](dumitrescu.stefan@gmail.com) and [Andrei-Marius](avram.andreimarius@gmail.com)
### Dataset Summary
RONEC, at version 2.0, holds 12,330 sentences with over 0.5M tokens, annotated with 15 classes, for a total of 80,283 distinctly annotated entities.
The corpus has the following classes and distribution in the train/valid/test splits:
| Classes | Total | Train # | Train % | Valid # | Valid % | Test # | Test % |
|-------------|:-----:|:-------:|:-------:|:-------:|:-------:|:------:|:------:|
| PERSON | **26130** | 19167 | 73.35 | 2733 | 10.46 | 4230 | 16.19 |
| GPE | **11103** | 8193 | 73.79 | 1182 | 10.65 | 1728 | 15.56 |
| LOC | **2467** | 1824 | 73.94 | 270 | 10.94 | 373 | 15.12 |
| ORG | **7880** | 5688 | 72.18 | 880 | 11.17 | 1312 | 16.65 |
| LANGUAGE | **467** | 342 | 73.23 | 52 | 11.13 | 73 | 15.63 |
| NAT_REL_POL | **4970** | 3673 | 73.90 | 516 | 10.38 | 781 | 15.71 |
| DATETIME | **9614** | 6960 | 72.39 | 1029 | 10.7 | 1625 | 16.9 |
| PERIOD | **1188** | 862 | 72.56 | 129 | 10.86 | 197 | 16.58 |
| QUANTITY | **1588** | 1161 | 73.11 | 181 | 11.4 | 246 | 15.49 |
| MONEY | **1424** | 1041 | 73.10 | 159 | 11.17 | 224 | 15.73 |
| NUMERIC | **7735** | 5734 | 74.13 | 814 | 10.52 | 1187 | 15.35 |
| ORDINAL | **1893** | 1377 | 72.74 | 212 | 11.2 | 304 | 16.06 |
| FACILITY | **1126** | 840 | 74.6 | 113 | 10.04 | 173 | 15.36 |
| WORK_OF_ART | **1596** | 1157 | 72.49 | 176 | 11.03 | 263 | 16.48 |
| EVENT | **1102** | 826 | 74.95 | 107 | 9.71 | 169 | 15.34 |
### Supported Tasks and Leaderboards
The corpus is meant to train Named Entity Recognition models for the Romanian language.
Please see the leaderboard here : [https://lirobenchmark.github.io/](https://lirobenchmark.github.io/)
### Languages
RONEC is in Romanian (`ro`)
## Dataset Structure
### Data Instances
The dataset is a list of instances. For example, an instance looks like:
```json
{
"id": 10454,
"tokens": ["Pentru", "a", "vizita", "locația", "care", "va", "fi", "pusă", "la", "dispoziția", "reprezentanților", "consiliilor", "județene", ",", "o", "delegație", "a", "U.N.C.J.R.", ",", "din", "care", "a", "făcut", "parte", "și", "dl", "Constantin", "Ostaficiuc", ",", "președintele", "C.J.T.", ",", "a", "fost", "prezentă", "la", "Bruxelles", ",", "între", "1-3", "martie", "."],
"ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "O", "O", "O", "O", "O", "O", "B-ORG", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "B-ORG", "O", "O", "O", "O", "O", "B-GPE", "O", "B-PERIOD", "I-PERIOD", "I-PERIOD", "O"],
"ner_ids": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 3, 0, 0, 0, 0, 0, 5, 0, 19, 20, 20, 0],
"space_after": [true, true, true, true, true, true, true, true, true, true, true, true, false, true, true, true, true, false, true, true, true, true, true, true, true, true, true, false, true, true, false, true, true, true, true, true, false, true, true, true, false, false]
}
```
### Data Fields
The fields of each example are:
- ``tokens`` are the words of the sentence.
- ``ner_tags`` are the string tags assigned to each token, following the BIO2 format. For example, the span ``"între", "1-3", "martie"`` has three tokens, but is a single class ``PERIOD``, marked as ``"B-PERIOD", "I-PERIOD", "I-PERIOD"``.
- ``ner_ids`` are the integer encoding of each tag, to be compatible with the standard and to be quickly used for model training. Note that each ``B``-starting tag is odd, and each ``I``-starting tag is even.
- ``space_after`` is used to help if there is a need to detokenize the dataset. A ``true`` value means that there is a space after the token at that position (see the sketch below).
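A minimal detokenization sketch using ``tokens`` and ``space_after``:
```python
def detokenize(tokens, space_after):
    """Rebuild the raw sentence, adding a space after a token only where flagged."""
    return "".join(
        token + (" " if space else "") for token, space in zip(tokens, space_after)
    ).rstrip()

# Applied to the example instance above (id 10454), this reproduces the original
# sentence, e.g. "... o delegație a U.N.C.J.R., din care a făcut parte și dl ..."
```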
### Data Splits
The dataset is split into train (9,000 sentences), dev (1,330 sentences) and test (2,000 sentences).
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
*The corpus data source represents sentences that are free of copyright, taken from older datasets like the freely available SEETimes and more recent datasources like the Romanian Wikipedia or the Common Crawl.*
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
The corpus was annotated with the following classes:
1. PERSON - proper nouns, including common nouns or pronouns if they refer to a person. (e.g. 'sister')
2. GPE - geo political entity, like a city or a country; has to have a governance form
3. LOC - location, like a sea, continent, region, road, address, etc.
4. ORG - organization
5. LANGUAGE - language (e.g. Romanian, French, etc.)
6. NAT_REL_POL - national, religious or political organizations
7. DATETIME - a time and date in any format, including references to time (e.g. 'yesterday')
8. PERIOD - a period that is precisely bounded by two date times
9. QUANTITY - a quantity that is not numerical; it has a unit of measure
10. MONEY - a monetary value, numeric or otherwise
11. NUMERIC - a simple numeric value, represented as digits or words
12. ORDINAL - an ordinal value like 'first', 'third', etc.
13. FACILITY - a named place that is easily recognizable
14. WORK_OF_ART - a work of art like a named TV show, painting, etc.
15. EVENT - a named recognizable or periodic major event
#### Annotation process
The corpus was annotated by 3 language experts, and was cross-checked for annotation consistency. The annotation took several months to complete, but the result is a high quality dataset.
#### Who are the annotators?
Stefan Dumitrescu (lead).
### Personal and Sensitive Information
All the source data is already freely downloadable and usable online, so there are no privacy concerns.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
MIT License
### Citation Information
```bibtex
@article{dumitrescu2019introducing,
title={Introducing RONEC--the Romanian Named Entity Corpus},
author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},
journal={arXiv preprint arXiv:1909.01247},
year={2019}
}
```
### Contributions
Thanks to [@iliemihai](https://github.com/iliemihai) for adding v1.0 of the dataset. |
sanskrit_classic | 2022-11-03T16:07:56.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:sa",... | null | This dataset combines some of the classical Sanskrit texts. | @Misc{johnsonetal2014,
author = {Johnson, Kyle P. and Patrick Burns and John Stewart and Todd Cook},
title = {CLTK: The Classical Language Toolkit},
url = {https://github.com/cltk/cltk},
year = {2014--2020},
} | null | 2 | 10 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- sa
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: SanskritClassic
dataset_info:
features:
- name: text
dtype: string
config_name: combined
splits:
- name: train
num_bytes: 40299787
num_examples: 342033
download_size: 7258904
dataset_size: 40299787
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[sanskrit_classic](https://github.com/parmarsuraj99/hf_datasets/tree/master/sanskrit_classic)
- **Repository:**[GitHub](https://github.com/parmarsuraj99/hf_datasets/tree/master/sanskrit_classic)
- **Paper:**N/A
- **Leaderboard:**N/A
- **Point of Contact:**[parmarsuraj99](parmarsuraj99@gmail.com)
### Dataset Summary
A collection of classical Sanskrit texts.
### Supported Tasks and Leaderboards
Language modeling
### Languages
Sanskrit
## Dataset Structure
### Data Instances
{'text': 'मा कर्मफलहेतुर्भूर्मा ते सङ्गोऽस्त्वकर्मणि॥'}
### Data Fields
`text`: a line
### Data Splits
| | Train |
|-------------------|--------|
| n_instances | 342033 |
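A loading sketch, assuming the dataset is available through the Hugging Face `datasets` library under the id `sanskrit_classic` with the `combined` config:
```python
from datasets import load_dataset

ds = load_dataset("sanskrit_classic", "combined", split="train")
print(len(ds))        # 342033 lines
print(ds[0]["text"])  # a single line of classical Sanskrit text
```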
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@Misc{johnsonetal2014,
author = {Johnson, Kyle P. and Patrick Burns and John Stewart and Todd Cook},
title = {CLTK: The Classical Language Toolkit},
url = {https://github.com/cltk/cltk},
year = {2014--2020},
}
```
### Contributions
Thanks to [@parmarsuraj99](https://github.com/parmarsuraj99) for adding this dataset. |
sem_eval_2020_task_11 | 2023-01-25T14:43:56.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:unknown",
"propaganda-span-identification",
... | null | Propagandistic news articles use specific techniques to convey their message,
such as whataboutism, red Herring, and name calling, among many others.
The Propaganda Techniques Corpus (PTC) allows to study automatic algorithms to
detect them. We provide a permanent leaderboard to allow researchers both to
advertise their progress and to be up-to-speed with the state of the art on the
tasks offered (see below for a definition). | @misc{martino2020semeval2020,
title={SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles},
author={G. Da San Martino and A. Barrón-Cedeño and H. Wachsmuth and R. Petrov and P. Nakov},
year={2020},
eprint={2009.02696},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 5 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
- token-classification
task_ids: []
pretty_name: SemEval-2020 Task 11
tags:
- propaganda-span-identification
- propaganda-technique-classification
dataset_info:
features:
- name: article_id
dtype: string
- name: text
dtype: string
- name: span_identification
sequence:
- name: start_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: technique_classification
sequence:
- name: start_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: technique
dtype:
class_label:
names:
'0': Appeal_to_Authority
'1': Appeal_to_fear-prejudice
'2': Bandwagon,Reductio_ad_hitlerum
'3': Black-and-White_Fallacy
'4': Causal_Oversimplification
'5': Doubt
'6': Exaggeration,Minimisation
'7': Flag-Waving
'8': Loaded_Language
'9': Name_Calling,Labeling
'10': Repetition
'11': Slogans
'12': Thought-terminating_Cliches
'13': Whataboutism,Straw_Men,Red_Herring
splits:
- name: train
num_bytes: 2358613
num_examples: 371
- name: test
num_bytes: 454100
num_examples: 90
- name: validation
num_bytes: 396410
num_examples: 75
download_size: 0
dataset_size: 3209123
---
# Dataset Card for SemEval-2020 Task 11
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PTC TASKS ON "DETECTION OF PROPAGANDA TECHNIQUES IN NEWS ARTICLES"](https://propaganda.qcri.org/ptc/index.html)
- **Paper:** [SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles](https://arxiv.org/abs/2009.02696)
- **Leaderboard:** [PTC Tasks Leaderboard](https://propaganda.qcri.org/ptc/leaderboard.php)
- **Point of Contact:** [Task organizers contact](semeval-2020-task-11-organizers@googlegroups.com)
### Dataset Summary
Propagandistic news articles use specific techniques to convey their message, such as whataboutism, red herring, and name calling, among many others. The Propaganda Techniques Corpus (PTC) allows the study of automatic algorithms to detect them. We provide a permanent leaderboard to allow researchers both to advertise their progress and to be up-to-speed with the state of the art on the tasks offered (see below for a definition).
### Supported Tasks and Leaderboards
More information on scoring methodology can be found in [propaganda tasks evaluation document](https://propaganda.qcri.org/ptc/data/propaganda_tasks_evaluation.pdf)
### Languages
This dataset consists of English news articles
## Dataset Structure
### Data Instances
Each example is structured as follows:
```
{
"span_identification": {
"end_char_offset": [720, 6322, ...],
"start_char_offset": [683, 6314, ...]
},
"technique_classification": {
"end_char_offset": [720,6322, ...],
"start_char_offset": [683,6314, ...],
"technique": [7,8, ...]
},
"text": "Newt Gingrich: The truth about Trump, Putin, and Obama\n\nPresident Trump..."
}
```
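The character offsets index directly into `text`, so the annotated spans can be recovered by simple slicing; a minimal sketch over a single example:
```python
def propaganda_spans(example):
    """Yield (technique_label_id, span_text) pairs for one article."""
    tc = example["technique_classification"]
    for start, end, technique in zip(
        tc["start_char_offset"], tc["end_char_offset"], tc["technique"]
    ):
        yield technique, example["text"][start:end]

# For the instance above, the first pair of offsets (683, 720) selects a 37-character snippet.
```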
### Data Fields
- `text`: The full text of the news article.
- `span_identification`: a dictionary feature containing:
- `start_char_offset`: The start character offset of the span for the SI task
- `end_char_offset`: The end character offset of the span for the SI task
- `technique_classification`: a dictionary feature containing:
- `start_char_offset`: The start character offset of the span for the TC task
- `end_char_offset`: The end character offset of the span for the TC task
- `technique`: the propaganda technique classification label, with possible values including `Appeal_to_Authority`, `Appeal_to_fear-prejudice`, `Bandwagon,Reductio_ad_hitlerum`, `Black-and-White_Fallacy`, `Causal_Oversimplification`.
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Sentences | 371 | 75 | 90 |
| Total Annotations SI | 5468 | 940 | 0 |
| Total Annotations TC | 6128 | 1063 | 0 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
In order to build the PTC-SemEval20 corpus, we retrieved a sample of news articles from the period
starting in mid-2017 and ending in early 2019. We selected 13 propaganda and 36 non-propaganda news
media outlets, as labeled by Media Bias/Fact Check, and we retrieved articles from these sources. We
deduplicated the articles on the basis of word n-grams matching (Barrón-Cedeño and Rosso, 2009) and
we discarded faulty entries (e.g., empty entries from blocking websites).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The annotation job consisted of both spotting a propaganda snippet and, at the same time, labeling
it with a specific propaganda technique. The annotation guidelines are shown in the appendix; they
are also available online. We ran the annotation in two phases: (i) two annotators label an article
independently and (ii) the same two annotators gather together with a consolidator to discuss dubious
instances (e.g., spotted only by one annotator, boundary discrepancies, label mismatch, etc.). This protocol
was designed after a pilot annotation stage, in which a relatively large number of snippets had been spotted
by one annotator only. The annotation team consisted of six professional annotators from A Data Pro trained to spot and label the propaganda snippets from free text. The job was carried out on an instance of
the Anafora annotation platform (Chen and Styler, 2013), which we tailored for our propaganda annotation
task.
We evaluated the annotation process in terms of γ agreement (Mathet et al., 2015) between each of
the annotators and the final gold labels. The γ agreement on the annotated articles is on average 0.6;
see (Da San Martino et al., 2019b) for a more detailed discussion of inter-annotator agreement. The
training and the development part of the PTC-SemEval20 corpus are the same as the training and the
testing datasets described in (Da San Martino et al., 2019b). The test part of the PTC-SemEval20 corpus
consists of 90 additional articles selected from the same sources as for training and development. For
the test articles, we further extended the annotation process by adding one extra consolidation step: we
revisited all the articles in that partition and we performed the necessary adjustments to the spans and to
the labels as necessary, after a thorough discussion and convergence among at least three experts who
were not involved in the initial annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{martino2020semeval2020,
title={SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles},
author={G. Da San Martino and A. Barrón-Cedeño and H. Wachsmuth and R. Petrov and P. Nakov},
year={2020},
eprint={2009.02696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset. |
sharc | 2022-11-03T16:16:40.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0... | null | ShARC is a Conversational Question Answering dataset focussing on question answering from texts containing rules. The goal is to answer questions by possibly asking follow-up questions first. It is assumed assume that the question is often underspecified, in the sense that the question does not provide enough information to be answered directly. However, an agent can use the supporting rule text to infer what needs to be asked in order to determine the final answer. | @misc{saeidi2018interpretation,
title={Interpretation of Natural Language Rules in Conversational Machine Reading},
author={Marzieh Saeidi and Max Bartolo and Patrick Lewis and Sameer Singh and Tim Rocktäschel and Mike Sheldon and Guillaume Bouchard and Sebastian Riedel},
year={2018},
eprint={1809.01494},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 10 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: sharc
pretty_name: Shaping Answers with Rules through Conversation
tags:
- conversational-qa
dataset_info:
features:
- name: id
dtype: string
- name: utterance_id
dtype: string
- name: source_url
dtype: string
- name: snippet
dtype: string
- name: question
dtype: string
- name: scenario
dtype: string
- name: history
list:
- name: follow_up_question
dtype: string
- name: follow_up_answer
dtype: string
- name: evidence
list:
- name: follow_up_question
dtype: string
- name: follow_up_answer
dtype: string
- name: answer
dtype: string
- name: negative_question
dtype: bool_
- name: negative_scenario
dtype: bool_
config_name: sharc
splits:
- name: train
num_bytes: 15088577
num_examples: 21890
- name: validation
num_bytes: 1469172
num_examples: 2270
download_size: 5230207
dataset_size: 16557749
---
# Dataset Card for Shaping Answers with Rules through Conversation
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ShARC](https://sharc-data.github.io/index.html)
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [Interpretation of Natural Language Rules in Conversational Machine Reading](https://arxiv.org/abs/1809.01494)
- **Leaderboard:** [leaderboard](https://sharc-data.github.io/leaderboard.html)
- **Point of Contact:** [Marzieh Saeidi](marzieh.saeidi@gmail.com), [Max Bartolo](maxbartolo@gmail.com), [Patrick Lewis](patrick.s.h.lewis@gmail.com), [Sebastian Riedel](s.riedel@cs.ucl.ac.uk)
### Dataset Summary
ShARC is a Conversational Question Answering dataset focusing on question answering from texts containing rules. The goal is to answer questions by possibly asking follow-up questions first. It is assumed that the question is often underspecified, in the sense that the question does not provide enough information to be answered directly. However, an agent can use the supporting rule text to infer what needs to be asked in order to determine the final answer.
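A loading sketch, assuming the dataset is available through the Hugging Face `datasets` library under the id `sharc` (field names follow the metadata above):
```python
from datasets import load_dataset

ds = load_dataset("sharc", split="train")

ex = ds[0]
print(ex["snippet"])   # the rule text that supports the answer
print(ex["question"])  # the (possibly underspecified) user question
print(ex["scenario"])  # additional context about the user's situation
print(ex["history"])   # prior follow-up question/answer turns
print(ex["answer"])    # the target answer for this turn
```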
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
ttc4900 | 2023-01-25T14:54:33.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:tr",
"license:unknown",
"news-category-classification",
"region:us"
] | null | The data set is taken from kemik group
http://www.kemik.yildiz.edu.tr/
The data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.
We named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study http://journals.sagepub.com/doi/abs/10.1177/0165551515620551
If you use the dataset in a paper, please refer https://www.kaggle.com/savasy/ttc4900 as footnote and cite one of the papers as follows:
- A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018
- A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science Volume 25 Issue 5, 2018
- A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014. | @article{doi:10.5505/pajes.2018.15931,
author = {Yıldırım, Savaş and Yıldız, Tuğba},
title = {A comparative analysis of text classification for Turkish language},
journal = {Pamukkale Univ Muh Bilim Derg},
volume = {24},
number = {5},
pages = {879-886},
year = {2018},
doi = {10.5505/pajes.2018.15931},
note ={doi: 10.5505/pajes.2018.15931},
URL = {https://dx.doi.org/10.5505/pajes.2018.15931},
eprint = {https://dx.doi.org/10.5505/pajes.2018.15931}
} | null | 2 | 10 | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: TTC4900 - A Benchmark Data for Turkish Text Categorization
tags:
- news-category-classification
dataset_info:
features:
- name: category
dtype:
class_label:
names:
'0': siyaset
'1': dunya
'2': ekonomi
'3': kultur
'4': saglik
'5': spor
'6': teknoloji
- name: text
dtype: string
config_name: ttc4900
splits:
- name: train
num_bytes: 10640831
num_examples: 4900
download_size: 10627541
dataset_size: 10640831
---
# Dataset Card for TTC4900: A Benchmark Data for Turkish Text Categorization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TTC4900 Homepage](https://www.kaggle.com/savasy/ttc4900)
- **Repository:** [TTC4900 Repository](https://github.com/savasy/TurkishTextClassification)
- **Paper:** [A Comparison of Different Approaches to Document Representation in Turkish Language](https://dergipark.org.tr/en/pub/sdufenbed/issue/38975/456349)
- **Point of Contact:** [Savaş Yıldırım](mailto:savasy@gmail.com)
### Dataset Summary
The data set is taken from [kemik group](http://www.kemik.yildiz.edu.tr/)
The data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.
We named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study ["A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014"](https://link.springer.com/chapter/10.1007/978-3-642-54903-8_36)
If you use the dataset in a paper, please refer https://www.kaggle.com/savasy/ttc4900 as footnote and cite one of the papers as follows:
- A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018
- A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science Volume 25 Issue 5, 2018
- A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Turkish.
## Dataset Structure
### Data Instances
A text classification dataset with 7 different news categories.
Here is an example from the dataset:
```
{
"category": 0, # politics/siyaset
"text": "paris teki infaz imralı ile başlayan sürece bir darbe mi elif_çakır ın sunduğu söz_bitmeden in bugünkü konuğu gazeteci melih altınok oldu programdan satıbaşları imralı ile görüşmeler hangi aşamada bundan sonra ne olacak hangi kesimler sürece engel oluyor psikolojik mayınlar neler türk solu bu dönemde evrensel sorumluluğunu yerine getirebiliyor mu elif_çakır sordu melih altınok söz_bitmeden de yanıtladı elif_çakır pkk nın silahsızlandırılmasına yönelik olarak öcalan ile görüşme sonrası 3 kadının infazı enteresan çünkü kurucu isimlerden birisi sen nasıl okudun bu infazı melih altınok herkesin ciddi anlamda şüpheleri var şu an yürüttüğümüz herşey bir delile dayanmadığı için komple teorisinden ibaret kalacak ama şöyle bir durum var imralı görüşmelerin ilk defa bir siyasi iktidar tarafından açıkça söylendiği bir dönem ardından geliyor bu sürecin gerçekleşmemesini isteyen kesimler yaptırmıştır dedi"
}
```
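A sketch of mapping the integer `category` label back to its name, assuming the dataset is available through the Hugging Face `datasets` library under the id `ttc4900`:
```python
from datasets import load_dataset

ds = load_dataset("ttc4900", split="train")

ex = ds[0]
print(ds.features["category"].int2str(ex["category"]))  # e.g. "siyaset" (politics)
print(ex["text"][:100])                                  # first 100 characters of the news text
```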
### Data Fields
- **category** : Indicates to which category the news text belongs.
(Such as "politics", "world", "economy", "culture", "health", "sports", "technology".)
- **text** : Contains the text of the news.
### Data Splits
The dataset is not divided into train and test sets; it consists of a single train split with 4,900 examples.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data are pre-processed for the text categorization, collocations are found, character set is corrected, and so forth.
#### Who are the source language producers?
Turkish online news sites.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by [Savaş Yıldırım](https://github.com/savasy)
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{doi:10.5505/pajes.2018.15931,
author = {Yıldırım, Savaş and Yıldız, Tuğba},
title = {A comparative analysis of text classification for Turkish language},
journal = {Pamukkale Univ Muh Bilim Derg},
volume = {24},
number = {5},
pages = {879-886},
year = {2018},
doi = {10.5505/pajes.2018.15931},
note ={doi: 10.5505/pajes.2018.15931},
URL = {https://dx.doi.org/10.5505/pajes.2018.15931},
eprint = {https://dx.doi.org/10.5505/pajes.2018.15931}
}
```
### Contributions
Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset. |
udhr | 2022-11-03T16:16:11.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:aa",
"language:ab",
"language:ace",
"language:acu",
"language:ada",
"language:ady",
"language:af",
"... | null | The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by
representatives with different legal and cultural backgrounds from all regions of the world, it set out, for the
first time, fundamental human rights to be universally protected. The Declaration was adopted by the UN General
Assembly in Paris on 10 December 1948 during its 183rd plenary meeting. The dataset includes translations of the
document in 464+ languages and dialects.
© 1996 – 2009 The Office of the High Commissioner for Human Rights
This plain text version prepared by the “UDHR in Unicode” project, https://www.unicode.org/udhr. | null | null | 1 | 10 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- aa
- ab
- ace
- acu
- ada
- ady
- af
- agr
- aii
- ajg
- als
- alt
- am
- amc
- ame
- ami
- amr
- ar
- arl
- arn
- ast
- auc
- ay
- az
- ban
- bax
- bba
- bci
- be
- bem
- bfa
- bg
- bho
- bi
- bik
- bin
- blt
- bm
- bn
- bo
- boa
- br
- bs
- buc
- bug
- bum
- ca
- cab
- cak
- cbi
- cbr
- cbs
- cbt
- cbu
- ccp
- ceb
- cfm
- ch
- chj
- chk
- chr
- cic
- cjk
- cjs
- cjy
- ckb
- cnh
- cni
- cnr
- co
- cof
- cot
- cpu
- crh
- cri
- crs
- cs
- csa
- csw
- ctd
- cy
- da
- dag
- ddn
- de
- dga
- dip
- duu
- dv
- dyo
- dyu
- dz
- ee
- el
- en
- eo
- es
- ese
- et
- eu
- eve
- evn
- fa
- fat
- fi
- fj
- fkv
- fo
- fon
- fr
- fuf
- fur
- fuv
- fvr
- fy
- ga
- gaa
- gag
- gan
- gd
- gjn
- gkp
- gl
- gld
- gn
- gsw
- gu
- guc
- guu
- gv
- gyr
- ha
- hak
- haw
- he
- hi
- hil
- hlt
- hmn
- hms
- hna
- hni
- hnj
- hns
- hr
- hsb
- hsn
- ht
- hu
- hus
- huu
- hy
- ia
- ibb
- id
- idu
- ig
- ii
- ijs
- ilo
- io
- is
- it
- iu
- ja
- jiv
- jv
- ka
- kaa
- kbd
- kbp
- kde
- kdh
- kea
- kek
- kg
- kha
- kjh
- kk
- kkh
- kl
- km
- kmb
- kn
- ko
- koi
- koo
- kqn
- kqs
- kr
- kri
- krl
- ktu
- ku
- kwi
- ky
- la
- lad
- lah
- lb
- lg
- lia
- lij
- lld
- ln
- lns
- lo
- lob
- lot
- loz
- lt
- lua
- lue
- lun
- lus
- lv
- mad
- mag
- mai
- mam
- man
- maz
- mcd
- mcf
- men
- mfq
- mg
- mh
- mi
- mic
- min
- miq
- mk
- ml
- mn
- mnw
- mor
- mos
- mr
- mt
- mto
- mxi
- mxv
- my
- mzi
- nan
- nb
- nba
- nds
- ne
- ng
- nhn
- nio
- niu
- niv
- njo
- nku
- nl
- nn
- not
- nr
- nso
- nv
- ny
- nym
- nyn
- nzi
- oaa
- oc
- ojb
- oki
- om
- orh
- os
- ote
- pa
- pam
- pap
- pau
- pbb
- pcd
- pcm
- pis
- piu
- pl
- pon
- pov
- ppl
- prq
- ps
- pt
- qu
- quc
- qug
- quh
- quy
- qva
- qvc
- qvh
- qvm
- qvn
- qwh
- qxn
- qxu
- rar
- rgn
- rm
- rmn
- rn
- ro
- ru
- rup
- rw
- sa
- sah
- sc
- sco
- se
- sey
- sg
- shk
- shn
- shp
- si
- sk
- skr
- sl
- slr
- sm
- sn
- snk
- snn
- so
- sr
- srr
- ss
- st
- su
- suk
- sus
- sv
- sw
- swb
- ta
- taj
- tbz
- tca
- tdt
- te
- tem
- tet
- tg
- th
- ti
- tiv
- tk
- tl
- tly
- tn
- to
- tob
- toi
- toj
- top
- tpi
- tr
- ts
- tsz
- tt
- tw
- ty
- tyv
- tzh
- tzm
- tzo
- udu
- ug
- uk
- umb
- und
- ur
- ura
- uz
- vai
- ve
- vec
- vep
- vi
- vmw
- wa
- war
- wo
- wuu
- wwa
- xh
- xsm
- yad
- yao
- yap
- yi
- ykg
- yo
- yrk
- yua
- yue
- za
- zam
- zdj
- zgh
- zh
- zlm
- zro
- ztu
- zu
language_bcp47:
- az-Cyrl
- az-Latn
- bs-Cyrl
- bs-Latn
- ckb-Latn
- de-1901
- de-1996
- el-monoton
- el-polyton
- fa-AF
- fuf-Adlm
- ha-NE
- ha-NG
- jv-Java
- kg-AO
- kkh-Lana
- mn-Cyrl
- pt-BR
- pt-PT
- rm-puter
- rm-rumgr
- rm-surmiran
- rm-sursilv
- rm-sutsilv
- rm-vallader
- sa-Gran
- sr-Cyrl
- sr-Latn
- ta-LK
- tk-Cyrl
- tk-Latn
- tw-akuapem
- tw-asante
- ug-Arab
- ug-Latn
- uz-Cyrl
- uz-Latn
- vi-Hani
- zh-Hant
- zlm-Arab
- zlm-Latn
license:
- unknown
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: The Universal Declaration of Human Rights (UDHR)
dataset_info:
features:
- name: text
dtype: string
- name: lang_key
dtype: string
- name: lang_name
dtype: string
- name: iso639-3
dtype: string
- name: iso15924
dtype: string
- name: bcp47
dtype: string
splits:
- name: train
num_bytes: 6753383
num_examples: 488
download_size: 2389690
dataset_size: 6753383
---
# Dataset Card for The Universal Declaration of Human Rights (UDHR)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.ohchr.org/en/universal-declaration-of-human-rights, https://unicode.org/udhr/index.html
- **Repository:** https://github.com/unicode-org/udhr
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by
representatives with different legal and cultural backgrounds from all regions of the world, it set out, for the
first time, fundamental human rights to be universally protected. The Declaration was adopted by the UN General
Assembly in Paris on 10 December 1948 during its 183rd plenary meeting.
© 1996 – 2009 The Office of the High Commissioner for Human Rights
This plain text version prepared by the "UDHR in Unicode" project, https://www.unicode.org/udhr.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset includes translations of the document in over 400 languages and dialects. The list of languages can be found
[here](https://unicode.org/udhr/translations.html).
## Dataset Structure
### Data Instances
Each instance corresponds to a different language and includes information about the language and the full document
text.
### Data Fields
- `text`: The full document text with each line of text delimited by a newline (`\n`).
- `lang_key`: The unique identifier of a given translation.
- `lang_name`: The textual description of language/dialect.
- `iso639-3`: The [iso639-3](https://iso639-3.sil.org/) language identifier.
- `iso15924`: The [iso15924](https://unicode.org/iso15924/iso15924-codes.html) language identifier.
- `bcp47`: The [BCP 47](https://www.rfc-editor.org/info/bcp47) language identifier.
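As an illustration of these fields, here is a minimal sketch (assuming the canonical `udhr` loader and the field names above) that selects one translation by its ISO 639-3 code:
```python
from datasets import load_dataset

udhr = load_dataset("udhr", split="train")

# Each row is one translation; select by the iso639-3 field described above.
spanish = udhr.filter(lambda row: row["iso639-3"] == "spa")
for row in spanish:
    print(row["lang_name"], "|", row["bcp47"])
    print(row["text"].split("\n")[0])  # first line of the translated document
```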
### Data Splits
Only a `train` split is included, which contains the full document in all languages.
| | train |
|--------------------|------:|
| Number of examples | 488 |
## Dataset Creation
### Curation Rationale
In addition to its social significance, the document set a world record in 1999 for being the most translated
document in the world and as such can be useful for settings requiring paired text between many languages.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
In addition to the social and political significance of the United Nations' Universal Declaration of Human Rights,
the document set a world record in 1999 for being the most translated document in the world and as such can be useful
for settings requiring paired text between many languages including those that are low resource and significantly
underrepresented in NLP research.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Although the document is translated into a very large number of languages, the text is very short and therefore may
have limited usefulness for most types of modeling and evaluation.
## Additional Information
### Dataset Curators
The txt/xml data files used here were compiled by The Unicode Consortium, which can be found
[here](https://unicode.org/udhr/index.html). The original texts can be found on the
[United Nations website](https://www.ohchr.org/EN/UDHR/Pages/UDHRIndex.aspx).
### Licensing Information
Source text © 1996 – 2022 The Office of the High Commissioner for Human Rights
The [Unicode license](https://www.unicode.org/license.txt) applies to these translations.
### Citation Information
United Nations. (1998). The Universal Declaration of Human Rights, 1948-1998. New York: United Nations Dept. of Public Information.
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. Updated May 2022 [@leondz](https://github.com/leondz). |
Doohae/klue-mrc-bm25 | 2022-02-09T08:10:52.000Z | [
"region:us"
] | Doohae | null | null | null | 0 | 10 | Entry not found |
GEM/wiki_auto_asset_turk | 2022-10-24T15:31:10.000Z | [
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:crowd-sourced",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1910.02677",
"arxiv:2005.00352",
"... | GEM | WikiAuto provides a set of aligned sentences from English Wikipedia and Simple
English Wikipedia as a resource to train sentence simplification systems.
The authors first crowd-sourced a set of manual alignments between sentences in
a subset of the Simple English Wikipedia and their corresponding versions in
English Wikipedia (this corresponds to the manual config in this version of the
dataset), then trained a neural CRF system to predict these alignments.
The trained alignment prediction model was then applied to the other articles in
Simple English Wikipedia with an English counterpart to create a larger corpus
of aligned sentences (corresponding to the auto and auto_acl configs here). | @inproceedings{jiang-etal-2020-neural,
title = "Neural {CRF} Model for Sentence Alignment in Text Simplification",
author = "Jiang, Chao and
Maddela, Mounica and
Lan, Wuwei and
Zhong, Yang and
Xu, Wei",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.709",
doi = "10.18653/v1/2020.acl-main.709",
pages = "7943--7960",
} | null | 3 | 10 | ---
annotations_creators:
- crowd-sourced
language_creators:
- unknown
language:
- en
license:
- other
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- text-simplification
pretty_name: wiki_auto_asset_turk
---
# Dataset Card for GEM/wiki_auto_asset_turk
## Dataset Description
- **Homepage:** n/a
- **Repository:** https://github.com/chaojiang06/wiki-auto, [ASSET repository](https://github.com/facebookresearch/asset), [TURKCorpus](https://github.com/cocoxu/simplification)
- **Paper:** [WikiAuto](https://aclanthology.org/2020.acl-main.709/), [ASSET](https://aclanthology.org/2020.acl-main.424/), [TURK](https://aclanthology.org/Q16-1029/)
- **Leaderboard:** N/A
- **Point of Contact:** WikiAuto: Chao Jiang; ASSET: Fernando Alva-Manchego and Louis Martin; TURK: Wei Xu
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/wiki_auto_asset_turk).
### Dataset Summary
WikiAuto is an English simplification dataset that we paired with ASSET and TURK, two very high-quality evaluation datasets, as test sets. The input is an English sentence taken from Wikipedia and the target a simplified sentence. ASSET and TURK contain the same test examples but have references that are simplified in different ways (splitting sentences vs. rewriting and splitting).
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/wiki_auto_asset_turk')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk).
#### website
n/a
#### paper
[WikiAuto](https://aclanthology.org/2020.acl-main.709/), [ASSET](https://aclanthology.org/2020.acl-main.424/), [TURK](https://aclanthology.org/Q16-1029/)
#### authors
WikiAuto: Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wei Xu; ASSET: Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia; TURK: Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Wiki-Auto repository](https://github.com/chaojiang06/wiki-auto), [ASSET repository](https://github.com/facebookresearch/asset), [TURKCorpus](https://github.com/cocoxu/simplification)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[WikiAuto](https://aclanthology.org/2020.acl-main.709/), [ASSET](https://aclanthology.org/2020.acl-main.424/), [TURK](https://aclanthology.org/Q16-1029/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
WikiAuto:
```
@inproceedings{jiang-etal-2020-neural,
title = "Neural {CRF} Model for Sentence Alignment in Text Simplification",
author = "Jiang, Chao and
Maddela, Mounica and
Lan, Wuwei and
Zhong, Yang and
Xu, Wei",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.709",
doi = "10.18653/v1/2020.acl-main.709",
pages = "7943--7960",
}
```
ASSET:
```
@inproceedings{alva-manchego-etal-2020-asset,
title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations",
author = "Alva-Manchego, Fernando and
Martin, Louis and
Bordes, Antoine and
Scarton, Carolina and
Sagot, Beno{\^\i}t and
Specia, Lucia",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.424",
pages = "4668--4679",
}
```
TURK:
```
@article{Xu-EtAl:2016:TACL,
author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},
title = {Optimizing Statistical Machine Translation for Text Simplification},
journal = {Transactions of the Association for Computational Linguistics},
volume = {4},
year = {2016},
url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf},
pages = {401--415}
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
WikiAuto: Chao Jiang; ASSET: Fernando Alva-Manchego and Louis Martin; TURK: Wei Xu
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
jiang.1530@osu.edu, f.alva@sheffield.ac.uk, louismartincs@gmail.com, wei.xu@cc.gatech.edu
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
Wiki-Auto contains English text only (BCP-47: `en`). It is presented as a translation task where Wikipedia Simple English is treated as its own idiom. For a statement of what is intended (but not always observed) to constitute Simple English on this platform, see [Simple English in Wikipedia](https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_English).
Both ASSET and TURK use crowdsourcing to create references, and their language is thus a combination of the WikiAuto data and the language of the worker demographic on Amazon Mechanical Turk.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
other: Other license
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems.
The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config in this version of the dataset), then trained a neural CRF system to predict these alignments.
The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here).
[ASSET](https://github.com/facebookresearch/asset) [(Alva-Manchego et al., 2020)](https://www.aclweb.org/anthology/2020.acl-main.424.pdf) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from [TurkCorpus](https://github.com/cocoxu/simplification/) [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence
splitting in [HSplit](https://www.aclweb.org/anthology/D18-1081.pdf)), the simplifications in ASSET encompass a variety of rewriting transformations.
TURKCorpus is a high-quality simplification dataset where each source (not simple) sentence is associated with 8 human-written simplifications that focus on lexical paraphrasing. It is one of the two evaluation datasets for the text simplification task in GEM. It acts as the validation and test set for paraphrasing-based simplification that does not involve sentence splitting and deletion.
#### Add. License Info
<!-- info: What is the 'other' license of the dataset? -->
<!-- scope: periscope -->
WikiAuto: `CC BY-NC 3.0`, ASSET: `CC BY-NC 4.0`, TURK: `GNU General Public License v3.0`
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Simplification
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The goal is to communicate the main ideas of the source sentence in a way that is easier to understand for non-native speakers of English.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`, `industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Ohio State University, University of Sheffield, Inria, Facebook AI Research, Imperial College London, University of Pennsylvania, John Hopkins University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
WikiAuto: Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wei Xu; ASSET: Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia; TURK: Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
WikiAuto: NSF, ODNI, IARPA, Figure Eight AI, and Criteo. ASSET: PRAIRIE Institute, ANR. TURK: NSF
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
GEM v1 had separate data cards for WikiAuto, ASSET, and TURK. They were contributed by Dhruv Kumar and Mounica Maddela. The initial data loader was written by Yacine Jernite. Sebastian Gehrmann merged and extended the data cards and migrated the loader to the v2 infrastructure.
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `source`: A source sentence from one of the datasets
- `target`: A single simplified sentence corresponding to `source`
- `references`: In the case of ASSET/TURK, `references` is a list of strings corresponding to the different reference simplifications.
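As a short sketch of how these fields line up across the test sets (the split names `test_asset` and `test_turk` below are an assumption about this loader; adjust them if the actual names differ):
```python
from datasets import load_dataset

data = load_dataset("GEM/wiki_auto_asset_turk")

# ASSET and TURK share inputs but differ in reference style and count.
asset_example = data["test_asset"][0]  # split name assumed
print(asset_example["source"])
print(len(asset_example["references"]))  # ASSET has 10 crowdsourced simplifications
```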
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The underlying datasets have extensive secondary annotations that can be used in conjunction with the GEM version. We omit those annotations to simplify the format into one that can be used by seq2seq models.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
'source': 'In early work, Rutherford discovered the concept of radioactive half-life , the radioactive element radon, and differentiated and named alpha and beta radiation .',
'target': 'Rutherford discovered the radioactive half-life, and the three parts of radiation which he named Alpha, Beta, and Gamma.'
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
In WikiAuto, which is used as the training and validation set, the following splits are provided:
| | Train | Dev | Test |
| ----- | ------ | ----- | ---- |
| Total sentence pairs | 373801 | 73249 | 118074 |
| Aligned sentence pairs | 1889 | 346 | 677 |
ASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training. For GEM, [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) will be used for training the model.
Each input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.
| | Dev | Test | Total |
| ----- | ------ | ---- | ----- |
| Input Sentences | 2000 | 359 | 2359 |
| Reference Simplifications | 20000 | 3590 | 23590 |
The test and validation sets are the same as those of [TurkCorpus](https://github.com/cocoxu/simplification/). The split was random.
There are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting.
TURKCorpus does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) or [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) (Jiang et al., 2020) for training.
Each input sentence has 8 associated reference simplified sentences. 2,359 input sentences are randomly split into 2,000 validation and 359 test sentences.
| | Dev | Test | Total |
| ----- | ------ | ---- | ----- |
| Input Sentences | 2000 | 359 | 2359 |
| Reference Simplifications | 16000 | 2872 | 18872 |
There are 21.29 tokens per reference on average.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
In our setup, we use WikiAuto as training/validation corpus and ASSET and TURK as test corpora. ASSET and TURK have the same inputs but differ in their reference style. Researchers can thus conduct targeted evaluations based on the strategies that a model should learn.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
WikiAuto is the largest open text simplification dataset currently available. ASSET and TURK are high quality test sets that are compatible with WikiAuto.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
Its unique setup with multiple test sets makes the task interesting, since it allows for the evaluation of multiple generations and of systems that simplify in different ways.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
simplification
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
We removed secondary annotations and focused on the simple `input->output` format, but combined the different sub-datasets.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
yes
#### Split Information
<!-- info: Describe how the new splits were created -->
<!-- scope: periscope -->
We split the original test set according to the syntactic complexity of the source sentences. To characterize sentence syntactic complexity, we use the 8-level developmental level (d-level) scale proposed by [Covington et al. (2006)](https://www.researchgate.net/publication/254033869_How_complex_is_that_sentence_A_proposed_revision_of_the_Rosenberg_and_Abbeduto_D-Level_Scale) and the implementation of [Lu, Xiaofei (2010)](https://www.jbe-platform.com/content/journals/10.1075/ijcl.15.4.02lu).
We thus split the original test set into 8 subsets corresponding to the 8 d-levels assigned to source sentences. We obtain the following number of instances per level and average d-level of the dataset:
| Total nb. sentences | L0 | L1 | L2 | L3 | L4 | L5 | L6 | L7 | Mean Level |
|-------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- |
| 359 | 166 | 0 | 58 | 32 | 5 | 28 | 7 | 63 | 2.38 |
#### Split Motivation
<!-- info: What aspects of the model's generation capacities were the splits created to test? -->
<!-- scope: periscope -->
The goal was to assess performance when simplifying source sentences with different syntactic structure and complexity.
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
There are recent supervised ([Martin et al., 2019](https://arxiv.org/abs/1910.02677), [Kriz et al., 2019](https://www.aclweb.org/anthology/N19-1317/), [Dong et al., 2019](https://www.aclweb.org/anthology/P19-1331/), [Zhang and Lapata, 2017](https://www.aclweb.org/anthology/D17-1062/)) and unsupervised ([Martin et al., 2020](https://arxiv.org/abs/2005.00352v1), [Kumar et al., 2020](https://www.aclweb.org/anthology/2020.acl-main.707/), [Surya et al., 2019](https://www.aclweb.org/anthology/P19-1198/)) text simplification models that can be used as baselines.
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
The common metric used for automatic evaluation is SARI [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029/).
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Simplification
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`, `BLEU`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
SARI: A simplification metric that considers both input and references to measure the "goodness" of words that are added, deleted, and kept.
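For reference, SARI is implemented in the Hugging Face `evaluate` library; a minimal scoring sketch (the sentences are illustrative only):
```python
import evaluate

# SARI scores a simplification against both the source and the references.
sari = evaluate.load("sari")

sources = ["About 95 species are currently accepted."]
predictions = ["About 95 species are currently known."]
references = [[
    "About 95 species are currently known.",
    "About 95 species are now accepted.",
    "95 species are now accepted.",
]]

print(sari.compute(sources=sources, predictions=predictions, references=references))
```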
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The original authors of WikiAuto and ASSET used human evaluation to assess the fluency, adequacy, and simplicity (details provided in the paper). For TURK, the authors measured grammaticality, meaning-preservation, and simplicity gain (details in the paper).
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
Wiki-Auto provides a new version of the Wikipedia corpus that is larger, contains 75% fewer defective pairs, and has more complex rewrites than the previous WIKILARGE dataset.
ASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the [TurkCorpus](https://github.com/cocoxu/simplification/) dataset from [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). The 2,359 input sentences of TurkCorpus are a sample of "standard" (not simple) sentences from the [Parallel Wikipedia Simplification (PWKP)](https://www.informatik.tu-darmstadt.de/ukp/research_6/data/sentence_simplification/simple_complex_sentence_pairs/index.en.jsp) dataset [(Zhu et al., 2010)](https://www.aclweb.org/anthology/C10-1152.pdf), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). No further information is provided on the sampling strategy.
The TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit [(Sulem et al., 2018)](https://www.aclweb.org/anthology/D18-1081.pdf), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.
An example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:
> **Original:** He settled in London, devoting himself chiefly to practical teaching.
>
> **TurkCorpus:** He rooted in London, devoting himself mainly to practical teaching.
>
> **HSplit:** He settled in London. He devoted himself chiefly to practical teaching.
>
> **ASSET:** He lived in London. He was a teacher.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The goal is to communicate the same information as the source sentence using simpler words and grammar.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
Wikipedia
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The dataset uses language from Wikipedia: some demographic information is provided [here](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F).
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
The authors mention that they "extracted 138,095 article pairs from the 2019/09 Wikipedia dump using an improved version of the [WikiExtractor](https://github.com/attardi/wikiextractor) library". The [SpaCy](https://spacy.io/) library is used for sentence splitting.
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
crowd-sourced
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
11<n<50
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
WikiAuto (Figure Eight): No information provided.
ASSET (MTurk):
- Having a HIT approval rate over 95%, and over 1000 HITs approved. No other demographic or compensation information is provided.
- Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.
- Being a resident of the United States, United Kingdom or Canada.
TURK (MTurk):
- Reference sentences were written by workers with HIT approval rate over 95%. No other demographic or compensation information is provided.
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
1
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
>5
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
yes
#### Which Annotation Service
<!-- info: Which annotation services were used? -->
<!-- scope: periscope -->
`Amazon Mechanical Turk`, `Appen`
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
WikiAuto: Sentence alignment labels were crowdsourced for 500 randomly sampled document pairs (10,123 sentence pairs total). The authors pre-selected several alignment candidates from English Wikipedia for each Simple Wikipedia sentence based on various similarity metrics, then asked the crowd-workers to annotate these pairs. Finally, they trained their alignment model on this manually annotated dataset to obtain automatically aligned sentences (138,095 document pairs, 488,332 sentence pairs).
No demographic annotation is provided for the crowd workers. The [Figure Eight](https://www.figure-eight.com/) platform (now part of Appen) was used for the annotation process.
ASSET: The instructions given to the annotators are available [here](https://github.com/facebookresearch/asset/blob/master/crowdsourcing/AMT_AnnotationInstructions.pdf).
TURK: The references are crowdsourced from Amazon Mechanical Turk. The annotators were asked to provide simplifications without losing any information or splitting the input sentence. No other demographic or compensation information is provided in the TURKCorpus paper. The instructions given to the annotators are available in the paper.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
none
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
Both Figure Eight and Amazon Mechanical Turk raters forfeit the right to their data as part of their agreements.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Since the dataset is created from Wikipedia/Simple Wikipedia, all the information contained in the dataset is already in the public domain.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
#### Links and Summaries of Analysis Work
<!-- info: Provide links to and summaries of works analyzing these biases. -->
<!-- scope: microscope -->
The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases [(Schmahl et al., 2020)](https://research.tudelft.nl/en/publications/is-wikipedia-succeeding-in-reducing-gender-bias-assessing-changes) and racial biases [(Adams et al., 2019)](https://journals.sagepub.com/doi/pdf/10.1177/2378023118823946).
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
All the data is in the public domain.
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases [(Schmahl et al., 2020)](https://research.tudelft.nl/en/publications/is-wikipedia-succeeding-in-reducing-gender-bias-assessing-changes) and racial biases [(Adams et al., 2019)](https://journals.sagepub.com/doi/pdf/10.1177/2378023118823946).
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
Since the test datasets contain only 2,359 sentences that are derived from Wikipedia, they are limited to a small subset of the topics present on Wikipedia.
|
LeverageX/klue-re | 2022-01-10T07:43:15.000Z | [
"region:us"
] | LeverageX | Klue Relation Extraction Data | null | null | 0 | 10 | Entry not found |
aminedjebbie/Multi-Arabic-dialects | 2022-02-10T20:28:50.000Z | [
"region:us"
] | aminedjebbie | null | null | null | 0 | 10 | Entry not found |
husnu/tquad-v1v2 | 2022-01-14T20:09:29.000Z | [
"region:us"
] | husnu | null | null | null | 0 | 10 | Entry not found |
lhoestq/conll2003 | 2021-12-21T11:23:57.000Z | [
"region:us"
] | lhoestq | null | null | null | 0 | 10 | Entry not found |
persiannlp/parsinlu_reading_comprehension | 2022-10-25T09:54:26.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|wikipedia|google",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:20... | persiannlp | A Persian reading comprehenion task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, and their answers and the corresponding evidence documents are manually annotated by native speakers. | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
} | null | 0 | 10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia|google
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for ParsiNLU (Reading Comprehension)
## Table of Contents
- [Dataset Card for ParsiNLU (Reading Comprehension)](#dataset-card-for-persi_nlu_reading_comprehension)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, and their answers and the corresponding evidence documents are manually annotated by native speakers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{
'question': 'پیامبر در چه سالی به پیامبری رسید؟',
'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
'passage': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه\u200cهای اطراف آن دیار به تفکر و عبادت می\u200cپرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
'answers': [
{'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
]
}
```
### Data Fields
- `question`: the question, mined using Google auto-complete.
- `passage`: the passage that contains the answer.
- `url`: the url from which the passage was mined.
- `answers`: a list of answers, containing the string and the index of the answer.
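A minimal loading sketch that reads these fields (assuming the standard `datasets` loader for this repository, and the answer structure shown in the example instance above):
```python
from datasets import load_dataset

data = load_dataset("persiannlp/parsinlu_reading_comprehension", split="train")

example = data[0]
print(example["question"])
# Recover each answer span from the passage via its start index.
for answer in example["answers"]:
    start = answer["answer_start"]
    print(example["passage"][start:start + len(answer["answer_text"])])
```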
### Data Splits
The train/test split contains 600/575 samples.
## Dataset Creation
### Curation Rationale
The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
unicamp-dl/mrobust | 2022-10-02T22:39:57.000Z | [
"arxiv:2108.13897",
"arxiv:2105.06813",
"arxiv:2209.13738",
"region:us"
] | unicamp-dl | Robust04 translated datasets | # @misc{bonifacio2021mmarco,
# title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
# author={Luiz Henrique Bonifacio and Israel Campiotti and Vitor Jeronymo and Hugo Queiroz Abonizio and Roberto Lotufo and Rodrigo Nogueira},
# year={2021},
# eprint={2108.13897},
# archivePrefix={arXiv},
# primaryClass={cs.CL}
# }
# | null | 1 | 10 | # Dataset Summary
**mRobust** is a multilingual version of the [TREC 2004 Robust passage ranking dataset](https://trec.nist.gov/data/robust/04.guidelines.html).
For more information, check out our papers:
<!-- * [**mRobust: A Multilingual Version of the MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897)
* [**A cost-benefit analysis of cross-lingual transfer methods**](https://arxiv.org/abs/2105.06813) -->
The current version is composed of 10 languages: Chinese, French, German, Indonesian, Italian, Portuguese, Russian, Spanish, Dutch and Vietnamese.
### Supported languages
| Language name | Language code |
|---------------|---------------|
| English | english |
| Chinese | chinese |
| French | french |
| German | german |
| Indonesian | indonesian |
| Italian | italian |
| Portuguese | portuguese |
| Russian | russian |
| Spanish | spanish |
| Dutch | dutch |
| Vietnamese | vietnamese |
# Dataset Structure
You can load the mRobust dataset by choosing a specific language. We include the translated collections of documents and queries.
#### Queries
```python
>>> dataset = load_dataset('unicamp-dl/mrobust', 'queries-spanish')
>>> dataset['queries'][1]
{'id': '302', 'text': '¿Está controlada la enfermedad de la poliomielitis (polio) en el mundo?'}
```
#### Collection
```python
>>> dataset = load_dataset('unicamp-dl/mrobust', 'collection-portuguese')
>>> dataset['collection'][5]
{'id': 'FT931-16660', 'text': '930105 FT 05 JAN 93 / Cenelec: Correção O endereço do Cenelec, Comitê Europeu de Normalização Eletrotécnica, estava incorreto na edição de ontem. É Rue de Stassart 35, B-1050, Bruxelas, Tel (322) 519 6871. CEN, Comitê Europeu de Normalização, está localizado na Rue de Stassart 36, B-1050, Bruxelas, Tel 519 6811.'}
```
# Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2209.13738,
doi = {10.48550/ARXIV.2209.13738},
url = {https://arxiv.org/abs/2209.13738},
author = {Jeronymo, Vitor and Nascimento, Mauricio and Lotufo, Roberto and Nogueira, Rodrigo},
title = {mRobust04: A Multilingual Version of the TREC Robust 2004 Benchmark},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
warwickai/financial_phrasebank_mirror | 2022-01-17T00:19:04.000Z | [
"region:us"
] | warwickai | null | null | null | 0 | 10 | Entry not found |
openclimatefix/uk_pv | 2022-11-30T17:02:42.000Z | [
"task_categories:time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:en",
"license:mit",
"pv",
... | openclimatefix | # UK PV dataset
PV solar generation data from the UK.
This dataset contains data from 1311 PV systems from 2018-01-01 to 2021-10-27.
The time series of solar generation is in 5-minute chunks.
This data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy.
If you are the owner of a PV system in the dataset, and do not want this data to be shared,
please do get in contact with info@openclimatefix.org.
## Files
The dataset contains two files
- metadata.csv: Data about the PV systems, e.g location
- pv.netcdf: Time series of PV solar generation
### metadata.csv
Metadata of the different PV systems.
Note that there are extra PV systems in this metadata that do not appear in the PV time series data.
The csv columns are
- ss_id: the id of the system
- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the PV system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: The orientation of the PV system
- tilt: The tilt of the PV system
- kwp: The capacity of the PV system
- operational_at: the datetime the PV system started working
### pv.netcdf
Time series data of PV solar generation is in [xarray](https://docs.xarray.dev/en/stable/) format.
The data variables are the same as 'ss_id' in the metadata.
Each data variable contains the solar generation (in kW) for that PV system.
The ss_id's here are a subset of all the ss_id's in the metadata.
The coordinate of the data is 'datetime', which is the datetime of the solar generation reading. | @InProceedings{uk_pv,
title = {UK PV solar generation dataset},
author = {Open Climate Fix},
year={2022}
} | null | 6 | 10 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: United Kingdom PV Solar generation
size_categories:
- 1B<n<10B
source_datasets:
- original
tags:
- pv
- photovoltaic
- environment
- climate
- energy
- electricity
task_categories:
- time-series-forecasting
task_ids:
- multivariate-time-series-forecasting
---
# UK PV dataset
PV solar generation data from the UK.
This dataset contains data from 1311 PV systems from 2018 to 2021.
Time granularity varies from 2 minutes to 30 minutes.
This data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy.
If you are the owner of a PV system in the dataset, and do not want this data to be shared,
please do get in contact with info@openclimatefix.org.
## Files
- metadata.csv: Data about the PV systems, e.g location
- 2min.parquet: Power output for PV systems every 2 minutes.
- 5min.parquet: Power output for PV systems every 5 minutes.
- 30min.parquet: Power output for PV systems every 30 minutes.
- pv.netcdf: (legacy) Time series of PV solar generation every 5 minutes
### metadata.csv
Metadata of the different PV systems.
Note that there are extra PV systems in this metadata that do not appear in the PV time-series data.
The csv columns are:
- ss_id: the id of the system
- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the PV system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: The orientation of the PV system
- tilt: The tilt of the PV system
- kwp: The capacity of the PV system
- operational_at: the datetime the PV system started working
### {2,5,30}min.parquet
Time series of solar generation for a number of systems.
Each file includes the systems for which there is enough granularity.
In particular the systems in 2min.parquet and 5min.parquet are also in 30min.parquet.
The files contain 3 columns:
- ss_id: the id of the system
- timestamp: the timestamp
- generation_wh: the generated power (in kW) at the given timestamp for the given system
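As an illustration, a minimal pandas sketch (assuming local copies of the files and the column names documented above) that joins the 30-minute readings with the system metadata:
```python
import pandas as pd

# Assumes the files have been downloaded from this repository.
readings = pd.read_parquet("30min.parquet")
metadata = pd.read_csv("metadata.csv")

# Attach capacity and rounded location to each reading via ss_id.
df = readings.merge(
    metadata[["ss_id", "kwp", "latitude_rounded", "longitude_rounded"]],
    on="ss_id",
    how="left",
)
print(df.head())
```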
### pv.netcdf (legacy)
Time series data of PV solar generation is in an [xarray](https://docs.xarray.dev/en/stable/) format.
The data variables are the same as 'ss_id' in the metadata.
Each data variable contains the solar generation (in kW) for that PV system.
The ss_id's here are a subset of all the ss_id's in the metadata.
The coordinate of the data is tagged as 'datetime', which is the datetime of the solar generation reading.
This is a subset of the more recent `5min.parquet` file.
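A minimal xarray sketch for the legacy file (assuming a local copy and the layout described above):
```python
import xarray as xr

pv = xr.open_dataset("pv.netcdf")

# Each data variable is one PV system's 5-minute generation series (kW).
ss_id = list(pv.data_vars)[0]           # pick one system
series = pv[ss_id].dropna("datetime")   # drop gaps in that system's readings
print(series.to_series().head())
```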
## example
using Hugging Face Datasets
```python
from datasets import load_dataset
dataset = load_dataset("openclimatefix/uk_pv")
```
## Useful links
https://huggingface.co/docs/datasets/share - this repo was made by following this tutorial |