| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
nRuaif/Pure-dove-sharegpt | 2023-10-01T14:01:10.000Z | [
"region:us"
] | nRuaif | null | null | null | 0 | 4 | Entry not found |
nikchar/retrieval_verification_bm25_bert | 2023-10-01T10:50:28.000Z | [
"region:us"
] | nikchar | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: claim
dtype: string
- name: evidence_wiki_url
dtype: string
- name: text
dtype: string
- name: retrieved_evidence_title
sequence: string
- name: retrieved_evidence_text
sequence: string
- name: labels
dtype: int64
- name: Retrieval_Success
dtype: bool
- name: Predicted_Labels
dtype: int64
- name: Predicted_Labels_Each_doc
sequence: int64
splits:
- name: train
num_bytes: 66031496
num_examples: 11073
download_size: 30811942
dataset_size: 66031496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "retrieval_verification_bm25_bert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
prattay/abinbev | 2023-10-01T09:46:36.000Z | [
"license:apache-2.0",
"region:us"
] | prattay | null | null | null | 0 | 4 | ---
license: apache-2.0
---
|
learn3r/SDG_cs | 2023-10-01T11:45:46.000Z | [
"region:us"
] | learn3r | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: jargon
dtype: string
- name: definition
dtype: string
splits:
- name: train
num_bytes: 44588
num_examples: 200
download_size: 29080
dataset_size: 44588
---
# Dataset Card for "SDG_cs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wwwlir/langcain_docs_l1 | 2023-10-01T19:43:24.000Z | [
"region:us"
] | wwwlir | null | null | null | 0 | 4 | Entry not found |
rmanluo/RoG-webqsp | 2023-10-01T23:40:22.000Z | [
"region:us"
] | rmanluo | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
sequence: string
- name: q_entity
sequence: string
- name: a_entity
sequence: string
- name: graph
sequence:
sequence: string
- name: choices
sequence: 'null'
splits:
- name: train
num_bytes: 993540472
num_examples: 2826
- name: validation
num_bytes: 84009553
num_examples: 246
- name: test
num_bytes: 580788090
num_examples: 1628
download_size: 0
dataset_size: 1658338115
---
# Dataset Card for "RoG-webqsp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rmanluo/RoG-cwq | 2023-10-01T23:47:36.000Z | [
"region:us"
] | rmanluo | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answer
sequence: string
- name: q_entity
sequence: string
- name: a_entity
sequence: string
- name: graph
sequence:
sequence: string
- name: choices
sequence: 'null'
splits:
- name: train
num_bytes: 8890766478
num_examples: 27639
- name: validation
num_bytes: 1170336525
num_examples: 3519
- name: test
num_bytes: 1208452620
num_examples: 3531
download_size: 1993772283
dataset_size: 11269555623
---
# Dataset Card for "RoG-cwq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sathvik-24/HinGlishLaama2 | 2023-10-02T07:00:41.000Z | [
"region:us"
] | Sathvik-24 | null | null | null | 0 | 4 | Entry not found |
JuanKO/T5_summarization_RLAIF | 2023-10-02T14:57:00.000Z | [
"license:apache-2.0",
"region:us"
] | JuanKO | null | null | null | 0 | 4 | ---
license: apache-2.0
dataset_info:
features:
- name: prompt
dtype: string
- name: summary_1
dtype: string
- name: summary_2
dtype: string
splits:
- name: train
num_bytes: 1697095
num_examples: 1000
download_size: 906302
dataset_size: 1697095
---
|
pphuc25/uit_data_sample | 2023-10-02T16:54:52.000Z | [
"region:us"
] | pphuc25 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: claim
dtype: string
- name: verdict
dtype: string
- name: evidence
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 4167523
num_examples: 1000
download_size: 1991987
dataset_size: 4167523
---
# Dataset Card for "uit_data_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adamo1139/PS_AD_Office365_02 | 2023-10-03T00:13:00.000Z | [
"license:apache-2.0",
"region:us"
] | adamo1139 | null | null | null | 0 | 4 | ---
license: apache-2.0
---
Second version of the synthetic dataset, created by placing part of a textbook in the context window of a 7B model and then asking the model
to generate a few questions and answers about that text.
It contains information about PowerShell basics, Office 365 basics, and Active Directory/GPO basics. |
ZhongshengWang/PARARULE-Plus-Alpaca | 2023-10-03T06:24:35.000Z | [
"license:mit",
"region:us"
] | ZhongshengWang | null | null | null | 0 | 4 | ---
license: mit
---
|
hanifabdlh/quac-lamini-instruction-indo-0k-10k | 2023-10-03T05:46:21.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 4177364
num_examples: 10000
download_size: 2408739
dataset_size: 4177364
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-lamini-instruction-indo-0k-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ouvic215/Soldering-Data-pix2pix-1001 | 2023-10-03T08:01:47.000Z | [
"region:us"
] | ouvic215 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: mask_image
dtype: image
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 961523307.5
num_examples: 12054
download_size: 960371764
dataset_size: 961523307.5
---
# Dataset Card for "Soldering-Data-pix2pix-1001"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nguyenthanhdo/viettel_v3.2 | 2023-10-03T08:52:34.000Z | [
"region:us"
] | nguyenthanhdo | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: translated
dtype: bool
- name: output_len
dtype: int64
- name: source
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 327564182.0
num_examples: 100000
download_size: 157982995
dataset_size: 327564182.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "viettel_v3.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ismailiismail/paraphrasing_french_5000 | 2023-10-03T19:47:58.000Z | [
"region:us"
] | ismailiismail | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: phrase
dtype: string
- name: paraphrase
dtype: string
splits:
- name: train
num_bytes: 1240685
num_examples: 4972
download_size: 499325
dataset_size: 1240685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "paraphrasing_french_5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Minglii/v_4096 | 2023-10-04T01:49:42.000Z | [
"region:us"
] | Minglii | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: data
struct:
- name: conversations
list:
- name: from
dtype: string
- name: markdown
struct:
- name: answer
dtype: string
- name: index
dtype: int64
- name: type
dtype: string
- name: text
dtype: string
- name: value
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 685122486
num_examples: 80129
download_size: 278043744
dataset_size: 685122486
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "v_4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Minglii/W_QthenA_4096 | 2023-10-04T01:51:58.000Z | [
"region:us"
] | Minglii | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: data
struct:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1576315785
num_examples: 143000
download_size: 537850801
dataset_size: 1576315785
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "W_QthenA_4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DamarJati/Face-Mask-Detection | 2023-10-04T06:34:17.000Z | [
"task_categories:image-classification",
"language:en",
"art",
"face mask",
"mask",
"region:us"
] | DamarJati | null | null | null | 0 | 4 | ---
language:
- en
pipeline_tag: image-classification
tags:
- art
- face mask
- mask
task_categories:
- image-classification
---
Original dataset: https://www.kaggle.com/datasets/ashishjangra27/face-mask-12k-images-dataset |
lakelz/mydataset-bpg | 2023-10-07T06:54:47.000Z | [
"region:us"
] | lakelz | null | null | null | 0 | 4 | This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
This dataset was used to train Guanaco with QLoRA.
For further information, please see the original dataset.
License: Apache 2.0 |
Alexandre-Numind/TrainIE_grouped | 2023-10-04T08:53:58.000Z | [
"region:us"
] | Alexandre-Numind | null | null | null | 0 | 4 | Entry not found |
Alexandre-Numind/ValIE_grouped | 2023-10-04T08:54:38.000Z | [
"region:us"
] | Alexandre-Numind | null | null | null | 0 | 4 | Entry not found |
TheAIchemist13/hindi_asr_dataset | 2023-10-10T15:11:52.000Z | [
"region:us"
] | TheAIchemist13 | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 12211841.0
num_examples: 80
- name: test
num_bytes: 12211841.0
num_examples: 80
download_size: 24346804
dataset_size: 24423682.0
---
# Dataset Card for "hindi_asr_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AayushShah/SQL_PlainText_Combined | 2023-10-04T10:49:38.000Z | [
"region:us"
] | AayushShah | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 349116676.7610253
num_examples: 306706
- name: test
num_bytes: 38791374.23897472
num_examples: 34079
download_size: 98654951
dataset_size: 387908051.0
---
# Dataset Card for "SQL_PlainText_Combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BlazeLlama/euclid_elements_eng | 2023-10-04T18:52:14.000Z | [
"license:apache-2.0",
"region:us"
] | BlazeLlama | null | null | null | 0 | 4 | ---
license: apache-2.0
---
|
finiteautomata/yahoo_dataset | 2023-10-04T18:34:26.000Z | [
"region:us"
] | finiteautomata | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int32
- name: topic
dtype:
class_label:
names:
'0': Society & Culture
'1': Science & Mathematics
'2': Health
'3': Education & Reference
'4': Computers & Internet
'5': Sports
'6': Business & Finance
'7': Entertainment & Music
'8': Family & Relationships
'9': Politics & Government
- name: question_title
dtype: string
- name: question_content
dtype: string
- name: best_answer
dtype: string
- name: question_title_embeddings
sequence: float32
- name: question_content_embeddings
sequence: float32
- name: best_answer_embeddings
sequence: float32
splits:
- name: train
num_bytes: 1032387680
num_examples: 200000
- name: test
num_bytes: 309853862
num_examples: 60000
download_size: 500190426
dataset_size: 1342241542
---
# Dataset Card for "yahoo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gorkaartola/ZS-train_S1-SDGdescriptions-AURORA1_S2-SDGdescriptions-SDGtitle_Negative_Sample_Filter-AURORA1 | 2023-10-04T20:07:43.000Z | [
"region:us"
] | gorkaartola | null | null | null | 0 | 4 | Entry not found |
angellist/cupcakeLPAParsingTest | 2023-10-05T06:03:51.000Z | [
"region:us"
] | angellist | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1483910
num_examples: 1138
download_size: 448368
dataset_size: 1483910
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cupcakeLPAParsingTest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sugeun/231005 | 2023-10-05T04:20:31.000Z | [
"region:us"
] | sugeun | null | null | null | 0 | 4 | Entry not found |
ManeAI31416/NASA_fine-tuning | 2023-10-05T05:36:01.000Z | [
"license:llama2",
"region:us"
] | ManeAI31416 | null | null | null | 0 | 4 | ---
license: llama2
---
|
hanifabdlh/quac-lamini-instruction-indo-30k-40k | 2023-10-05T06:19:20.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 4126187
num_examples: 10000
download_size: 2378575
dataset_size: 4126187
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-lamini-instruction-indo-30k-40k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jfrei/GPTNERMED | 2023-10-08T22:05:18.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"bio",
"biomedical",
"medical",
"c... | jfrei | GPTNERMED is a novel open synthesized dataset and neural named-entity-recognition (NER) model for German texts in medical natural language processing (NLP). | @article{FREI2023104478,
title = {Annotated dataset creation through large language models for non-english medical NLP},
journal = {Journal of Biomedical Informatics},
volume = {145},
pages = {104478},
year = {2023},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2023.104478},
url = {https://www.sciencedirect.com/science/article/pii/S1532046423001995},
author = {Johann Frei and Frank Kramer},
keywords = {Natural language processing, Information extraction, Named entity recognition, Data augmentation, Knowledge distillation, Medication detection},
abstract = {Obtaining text datasets with semantic annotations is an effortful process, yet crucial for supervised training in natural language processing (NLP). In general, developing and applying new NLP pipelines in domain-specific contexts for tasks often requires custom-designed datasets to address NLP tasks in a supervised machine learning fashion. When operating in non-English languages for medical data processing, this exposes several minor and major, interconnected problems such as the lack of task-matching datasets as well as task-specific pre-trained models. In our work, we suggest to leverage pre-trained large language models for training data acquisition in order to retrieve sufficiently large datasets for training smaller and more efficient models for use-case-specific tasks. To demonstrate the effectiveness of our approach, we create a custom dataset that we use to train a medical NER model for German texts, GPTNERMED, yet our method remains language-independent in principle. Our obtained dataset as well as our pre-trained models are publicly available at https://github.com/frankkramer-lab/GPTNERMED.}
} | null | 0 | 4 | ---
annotations_creators:
- machine-generated
language:
- de
language_creators:
- machine-generated
license: []
multilinguality:
- monolingual
pretty_name: GPTNERMED
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- bio
- biomedical
- medical
- clinical
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# GPTNERMED Dataset for German medical NER entities
## Dataset Description
- **Repository:** https://github.com/frankkramer-lab/GPTNERMED
- **Paper:** https://doi.org/10.1016/j.jbi.2023.104478
- **ArXiv-Preprint:** https://arxiv.org/abs/2208.14493
## Dataset Summary
This dataset contains the synthetic German sentences with annotated entities (`Medikation`, `Dosis`, `Diagnose`) from the GPTNERMED project.
The sentences and annotations have **not** been manually validated by medical professionals, so this dataset is **not** a gold-standard dataset.
The dataset consists of 9,845 sentences (121,027 tokens by the spaCy tokenizer, 245,107 tokens by the GPT tokenizer) with the following labels:
| Label | Count | #Tokens (spaCy) |
| --- | --- | --- |
| Medikation | 9868 | 10138 |
| Dosis | 7547 | 15845 |
| Diagnose | 5996 | 7656 |
## Dataset Structure
The data loader uses a train/test/dev split of 80%/10%/10%:\
`<-- train: 0.8 --><-- test: 0.1 --><-- dev: 0.1 -->`\
The splits were chosen arbitrarily, since the data loader requires a split configuration; all sample sentences are homogeneous in origin, so other splits would be equally valid.
Every sample is a sentence with its text (property `sentence`) and its corresponding NER labels (property `ner_labels`, a list of labels).\
Every NER label entry has a char-wise start and stop index (property `start`, `stop`) and a label class (property `ner_class`).
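A minimal loading sketch (an illustration that assumes the standard `datasets` loader for this repository, that `ner_labels` materialises as a list of dicts, and that your `datasets` version still supports script-based loaders, possibly via `trust_remote_code=True`):
```python
from datasets import load_dataset

# Sketch only; property names follow the structure described above.
ds = load_dataset("jfrei/GPTNERMED")  # trust_remote_code=True on some versions

sample = ds["train"][0]
print(sample["sentence"])
for label in sample["ner_labels"]:
    # `start`/`stop` are char-wise indices into the sentence.
    span = sample["sentence"][label["start"]:label["stop"]]
    print(label["ner_class"], span)
```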
### Citation Information
If you like our work, cite our paper and give us a star on GitHub.\
(See the links above)
|
Sathvik-24/chachadata | 2023-10-05T13:03:04.000Z | [
"region:us"
] | Sathvik-24 | null | null | null | 0 | 4 | Entry not found |
magnus42/test_train_hey | 2023-10-05T17:03:53.000Z | [
"region:us"
] | magnus42 | null | null | null | 0 | 4 | Entry not found |
ninja/arabic-english-translation | 2023-10-05T17:07:41.000Z | [
"region:us"
] | ninja | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: arabic
dtype: string
- name: english
dtype: string
splits:
- name: train
num_bytes: 228876.54205607477
num_examples: 674
- name: test
num_bytes: 25468.457943925234
num_examples: 75
download_size: 159571
dataset_size: 254345.0
---
# Dataset Card for "arabic-english-translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MasterBruce1/test1 | 2023-10-05T18:56:59.000Z | [
"region:us"
] | MasterBruce1 | null | null | null | 0 | 4 | Entry not found |
joey234/sst2_affix_pos | 2023-10-05T23:45:33.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: idx
dtype: int32
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: words_with_affixes
sequence: string
splits:
- name: validation
num_bytes: 9357
num_examples: 58
download_size: 9664
dataset_size: 9357
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "sst2_affix_pos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ayan1988/diffusion.2.textual_inversion | 2023-10-06T06:51:48.000Z | [
"region:us"
] | ayan1988 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1740639.0
num_examples: 6
download_size: 0
dataset_size: 1740639.0
---
# Dataset Card for "diffusion.2.textual_inversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/ali_prompts | 2023-10-06T07:36:22.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 367063
num_examples: 1000
download_size: 19378
dataset_size: 367063
---
# Dataset Card for "ali_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ikiransuryavanshi/llama_training3 | 2023-10-06T07:39:00.000Z | [
"region:us"
] | ikiransuryavanshi | null | null | null | 0 | 4 | Entry not found |
lafnac/sl-dataset | 2023-10-06T10:09:07.000Z | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:ar",
"license:afl-3.0",
"region:us"
] | lafnac | null | null | null | 0 | 4 | ---
license: afl-3.0
task_categories:
- text-classification
language:
- ar
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
dieineb/sartaj | 2023-10-06T11:58:25.000Z | [
"region:us"
] | dieineb | null | null | null | 0 | 4 | Entry not found |
HamdanXI/difference_analysis_data_structure | 2023-10-06T12:21:19.000Z | [
"region:us"
] | HamdanXI | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: en_toxic_comment
dtype: string
- name: en_neutral_comment
dtype: string
- name: edit_ops
sequence:
sequence: string
splits:
- name: train
num_bytes: 4067285
num_examples: 19744
download_size: 1996316
dataset_size: 4067285
---
# Dataset Card for "difference_analysis_data_structure"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
muhammadravi251001/indonesian-nli-and-qa | 2023-10-06T14:09:01.000Z | [
"license:mit",
"region:us"
] | muhammadravi251001 | null | null | null | 0 | 4 | ---
license: mit
---
|
ContextualAI/nq_open | 2023-10-07T00:34:08.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
sequence: string
splits:
- name: train
num_bytes: 5990520
num_examples: 79168
- name: dev
num_bytes: 660716
num_examples: 8757
- name: test
num_bytes: 313829
num_examples: 3610
download_size: 4681299
dataset_size: 6965065
---
# Dataset Card for "nq_open"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vsarathy/nl-robotics-semantic-parsing-info_structure-2k-no-context-TEST | 2023-10-07T12:31:44.000Z | [
"region:us"
] | vsarathy | null | null | null | 0 | 4 | Entry not found |
syaoran312/VHAC_QA_full | 2023-10-07T19:51:18.000Z | [
"region:us"
] | syaoran312 | null | null | null | 0 | 4 | Entry not found |
rongrong77/ADL_HW1 | 2023-10-08T06:52:58.000Z | [
"region:us"
] | rongrong77 | null | null | null | 0 | 4 | Entry not found |
tyzhu/synpre_union_1M | 2023-10-08T09:18:54.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1167868421
num_examples: 1000000
- name: validation
num_bytes: 11660114
num_examples: 10000
download_size: 788391948
dataset_size: 1179528535
---
# Dataset Card for "synpre_union_1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MikuHH/stagop | 2023-10-08T13:14:25.000Z | [
"region:us"
] | MikuHH | null | null | null | 0 | 4 | Entry not found |
Dmkond/tune-forms | 2023-10-08T15:44:24.000Z | [
"region:us"
] | Dmkond | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 842248
num_examples: 200
download_size: 221015
dataset_size: 842248
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tune-forms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ayansk11/llama2_merged_file | 2023-10-08T17:15:08.000Z | [
"region:us"
] | Ayansk11 | null | null | null | 0 | 4 | Entry not found |
gayanin/legal-es-masked | 2023-10-08T21:56:11.000Z | [
"region:us"
] | gayanin | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: original_sent
dtype: string
- name: masked_sent
dtype: string
splits:
- name: train
num_bytes: 14319276284
num_examples: 48833571
- name: test
num_bytes: 2144523252
num_examples: 6104196
- name: validation
num_bytes: 2169841655
num_examples: 6104197
download_size: 8287754892
dataset_size: 18633641191
---
# Dataset Card for "legal-es-masked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-90_10 | 2023-10-09T04:29:31.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 78943266
num_examples: 1631
- name: validation
num_bytes: 8529659
num_examples: 185
download_size: 14143196
dataset_size: 87472925
---
# Dataset Card for "COVID-QA-Chunk-64-question-answering-biencoder-data-90_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kelzla/ds_test1 | 2023-10-09T05:35:13.000Z | [
"region:us"
] | kelzla | null | null | null | 0 | 4 | Entry not found |
krthk/kapardhi_dataset | 2023-10-09T06:45:45.000Z | [
"region:us"
] | krthk | null | null | null | 0 | 4 | Entry not found |
Back-up/validation_data_T5 | 2023-10-09T06:58:57.000Z | [
"region:us"
] | Back-up | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 338661368
num_examples: 31984
download_size: 43689455
dataset_size: 338661368
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "validation_data_T5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
giovanni92/MailFuncData | 2023-10-09T08:00:32.000Z | [
"license:mit",
"region:us"
] | giovanni92 | null | null | null | 0 | 4 | ---
license: mit
---
|
Colin23189/kaggle-exam-llm | 2023-10-09T08:34:21.000Z | [
"region:us"
] | Colin23189 | null | null | null | 0 | 4 | Entry not found |
kolkata97/pellm0-zancanaro-split | 2023-10-09T12:43:38.000Z | [
"region:us"
] | kolkata97 | null | null | null | 0 | 4 | Entry not found |
tonywu71/PokemonCards_fixed | 2023-10-09T13:11:31.000Z | [
"license:mit",
"region:us"
] | tonywu71 | null | null | null | 0 | 4 | ---
license: mit
dataset_info:
features:
- name: id
dtype: string
- name: image_url
dtype: string
- name: caption
dtype: string
- name: name
dtype: string
- name: hp
dtype: int64
- name: set_name
dtype: string
splits:
- name: train
num_bytes: 9474973.87624629
num_examples: 13088
download_size: 3028812
dataset_size: 9474973.87624629
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dmrau/cqadupstack-android-qrels | 2023-10-09T12:39:31.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 43411
num_examples: 1696
download_size: 0
dataset_size: 43411
---
# Dataset Card for "cqadupstack-android-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmrau/cqadupstack-gaming | 2023-10-09T12:39:43.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 105494
num_examples: 1595
- name: corpus
num_bytes: 20666596
num_examples: 45301
download_size: 12946080
dataset_size: 20772090
---
# Dataset Card for "cqadupstack-gaming"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmrau/cqadupstack-webmasters | 2023-10-09T12:41:03.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 34792
num_examples: 506
- name: corpus
num_bytes: 11659413
num_examples: 17405
download_size: 6885106
dataset_size: 11694205
---
# Dataset Card for "cqadupstack-webmasters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmrau/cqadupstack-english | 2023-10-09T12:41:18.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 103588
num_examples: 1570
- name: corpus
num_bytes: 18199570
num_examples: 40221
download_size: 11382247
dataset_size: 18303158
---
# Dataset Card for "cqadupstack-english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmrau/cqadupstack-unix | 2023-10-09T12:42:00.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 72357
num_examples: 1072
- name: corpus
num_bytes: 46102756
num_examples: 47382
download_size: 24571026
dataset_size: 46175113
---
# Dataset Card for "cqadupstack-unix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmrau/cqadupstack-wordpress | 2023-10-09T12:42:09.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 35736
num_examples: 541
- name: corpus
num_bytes: 53026140
num_examples: 48605
download_size: 26551471
dataset_size: 53061876
---
# Dataset Card for "cqadupstack-wordpress"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
berardi6/LBcmopcenscaspnewwsx4 | 2023-10-09T12:44:50.000Z | [
"region:us"
] | berardi6 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 565921
num_examples: 1788
download_size: 180294
dataset_size: 565921
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "LBcmopcenscaspnewwsx4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Amarjitkr/medical | 2023-10-09T17:46:33.000Z | [
"license:apache-2.0",
"region:us"
] | Amarjitkr | null | null | null | 0 | 4 | ---
license: apache-2.0
---
|
amphora/lmsys-filtered | 2023-10-09T17:57:19.000Z | [
"region:us"
] | amphora | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: conversation
dtype: string
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
dtype: string
- name: redacted
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 317822351
num_examples: 62968
download_size: 122101594
dataset_size: 317822351
---
# Dataset Card for "lmsys-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rcherukuri14/science-qa-instructions | 2023-10-09T21:46:14.000Z | [
"region:us"
] | rcherukuri14 | null | null | null | 0 | 4 | Entry not found |
promptora11/train | 2023-10-10T04:28:14.000Z | [
"region:us"
] | promptora11 | null | null | null | 0 | 4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 90417.6
num_examples: 108
- name: test
num_bytes: 10046.4
num_examples: 12
download_size: 16905
dataset_size: 100464.0
---
# Dataset Card for "train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/invention_prompts | 2023-10-10T05:47:15.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 96461
num_examples: 1000
download_size: 2138
dataset_size: 96461
---
# Dataset Card for "invention_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Luciya/llama-2-nuv-intent-noE-pp | 2023-10-10T05:58:08.000Z | [
"region:us"
] | Luciya | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 791845
num_examples: 1585
download_size: 111893
dataset_size: 791845
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-noE-pp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jagadeesh-ti/sql-v4 | 2023-10-10T06:00:29.000Z | [
"region:us"
] | Jagadeesh-ti | null | null | null | 0 | 4 | Entry not found |
Luciya/llama-2-nuv-intent-noE-pp-oos | 2023-10-10T06:50:06.000Z | [
"region:us"
] | Luciya | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 921669
num_examples: 1834
download_size: 134964
dataset_size: 921669
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-noE-pp-oos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Luciya/llama-2-nuv-intent-noE-oos | 2023-10-10T06:50:18.000Z | [
"region:us"
] | Luciya | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 828135
num_examples: 1834
download_size: 127293
dataset_size: 828135
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-noE-oos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ilyas3141/ilias_test20 | 2023-10-10T08:41:24.000Z | [
"region:us"
] | ilyas3141 | null | null | null | 0 | 4 | Entry not found |
Oscaraandersson/testrag | 2023-10-10T09:13:42.000Z | [
"region:us"
] | Oscaraandersson | null | null | null | 0 | 4 | Entry not found |
autshumato | 2023-06-01T14:59:51.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:tn",
"language:ts",
"language:zu",
lice... | null | Multilingual information access is stipulated in the South African constitution. In practice, this
is hampered by a lack of resources and capacity to perform the large volumes of translation
work required to realise multilingual information access. One of the aims of the Autshumato
project is to develop machine translation systems for three South African language pairs. | @article{groenewald2010processing,
title={Processing parallel text corpora for three South African language pairs in the Autshumato project},
author={Groenewald, Hendrik J and du Plooy, Liza},
journal={AfLaT 2010},
pages={27},
year={2010}
} | null | 2 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
- tn
- ts
- zu
license:
- cc-by-2.5
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: autshumato
dataset_info:
- config_name: autshumato-en-tn
features:
- name: translation
dtype:
translation:
languages:
- en
- tn
splits:
- name: train
num_bytes: 28826392
num_examples: 159000
download_size: 9458762
dataset_size: 28826392
- config_name: autshumato-en-zu
features:
- name: translation
dtype:
translation:
languages:
- en
- zu
splits:
- name: train
num_bytes: 7188970
num_examples: 35489
download_size: 2068891
dataset_size: 7188970
- config_name: autshumato-en-ts
features:
- name: translation
dtype:
translation:
languages:
- en
- ts
splits:
- name: train
num_bytes: 50803849
num_examples: 450000
download_size: 15145915
dataset_size: 50803849
- config_name: autshumato-en-ts-manual
features:
- name: translation
dtype:
translation:
languages:
- en
- ts
splits:
- name: train
num_bytes: 10408757
num_examples: 92396
download_size: 2876924
dataset_size: 10408757
- config_name: autshumato-tn
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5132267
num_examples: 38206
download_size: 1599029
dataset_size: 5132267
- config_name: autshumato-ts
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3399674
num_examples: 58398
download_size: 974488
dataset_size: 3399674
config_names:
- autshumato-en-tn
- autshumato-en-ts
- autshumato-en-ts-manual
- autshumato-en-zu
- autshumato-tn
- autshumato-ts
---
# Dataset Card for autshumato
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://repo.sadilar.org/handle/20.500.12185/7/discover
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Multilingual information access is stipulated in the South African constitution. In practice, this
is hampered by a lack of resources and capacity to perform the large volumes of translation
work required to realise multilingual information access. One of the aims of the Autshumato
project is to develop machine translation systems for three South African language pairs.
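A minimal loading sketch (assuming the `datasets` library; the config names come from the metadata above, and depending on the loader version the SADiLaR source files may need to be downloaded manually and passed in via `data_dir`):
```python
from datasets import load_dataset

# Sketch only: config names follow the `config_names` list in the metadata.
# Some loader versions expect manually downloaded source files, e.g.
# load_dataset("autshumato", "autshumato-en-tn", data_dir="path/to/files").
ds = load_dataset("autshumato", "autshumato-en-tn")
pair = ds["train"][0]["translation"]  # e.g. {"en": "...", "tn": "..."}
print(pair["en"], "->", pair["tn"])
```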
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
[More Information Needed]
### Licensing Information
### Citation Information
```
@article{groenewald2010processing,
title={Processing parallel text corpora for three South African language pairs in the Autshumato project},
author={Groenewald, Hendrik J and du Plooy, Liza},
journal={AfLaT 2010},
pages={27},
year={2010}
}
```
### Contributions
Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset. |
bswac | 2022-11-03T16:15:55.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:bs",... | null | The Bosnian web corpus bsWaC was built by crawling the .ba top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Bosnian vs. Croatian vs. Serbian).
Version 1.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 1.1 contains newer and better linguistic annotations. | @misc{11356/1062,
title = {Bosnian web corpus {bsWaC} 1.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1062},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} } | null | 0 | 3 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- bs
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: BsWac
dataset_info:
features:
- name: sentence
dtype: string
config_name: bswac
splits:
- name: train
num_bytes: 9156258478
num_examples: 354581267
download_size: 1988514951
dataset_size: 9156258478
---
# Dataset Card for BsWac
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/bswac/
- **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1062
- **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic14-bs.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)
### Dataset Summary
The Bosnian web corpus bsWaC was built by crawling the .ba top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Bosnian vs. Croatian vs. Serbian).
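A minimal loading sketch (assuming the canonical `bswac` loader; the single `sentence` feature follows the metadata above). Since the corpus holds roughly 355M sentences (~9 GB), streaming avoids materialising it locally:
```python
from datasets import load_dataset

# Sketch only; streaming iterates the corpus without a full local download.
ds = load_dataset("bswac", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example["sentence"])
    if i == 2:  # show just the first few sentences
        break
```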
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is monolingual, in the Bosnian language.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@misc{11356/1062,
title = {Bosnian web corpus {bsWaC} 1.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1062},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} }
```
### Contributions
Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset. |
capes | 2022-11-03T16:15:53.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"language:pt",
"license:unknown",
"dissertation-abstracts-translation",
"theses-translation",
"region:u... | null | A parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil. The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were collected and aligned using the Hunalign algorithm. | @inproceedings{soares2018parallel,
title={A Parallel Corpus of Theses and Dissertations Abstracts},
author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={345--352},
year={2018},
organization={Springer}
} | null | 2 | 3 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- pt
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: capes
pretty_name: CAPES
tags:
- dissertation-abstracts-translation
- theses-translation
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
config_name: en-pt
splits:
- name: train
num_bytes: 472484364
num_examples: 1157610
download_size: 162229298
dataset_size: 472484364
---
# Dataset Card for CAPES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES](https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A parallel corpus of theses and dissertations abstracts in English and Portuguese was collected from the
CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil.
The corpus is sentence-aligned for all language pairs. Approximately 240,000 documents were
collected and aligned using the Hunalign algorithm.
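A minimal loading sketch (assuming the canonical `capes` loader; the `en-pt` config and `translation` feature follow the metadata above):
```python
from datasets import load_dataset

# Sketch only; each example holds an aligned English/Portuguese sentence pair.
ds = load_dataset("capes", "en-pt", split="train")
pair = ds[0]["translation"]
print(pair["en"])
print(pair["pt"])
```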
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{soares2018parallel,
title={A Parallel Corpus of Theses and Dissertations Abstracts},
author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={345--352},
year={2018},
organization={Springer}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
chr_en | 2023-06-01T14:59:50.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_categories:translation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"annotations_creators:found",
"annotations_creators:no-annotation",
"language_creators:found",
... | null | ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.
ChrEn is extremely low-resource, containing 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.
ChrEn also contains 5k Cherokee monolingual data to enable semi-supervised learning. | @inproceedings{zhang2020chren,
title={ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization},
author={Zhang, Shiyue and Frey, Benjamin and Bansal, Mohit},
booktitle={EMNLP2020},
year={2020}
} | null | 3 | 3 | ---
annotations_creators:
- expert-generated
- found
- no-annotation
language_creators:
- found
language:
- chr
- en
license:
- other
multilinguality:
- monolingual
- multilingual
- translation
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
- translation
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: chren
dataset_info:
- config_name: monolingual_raw
features:
- name: text_sentence
dtype: string
- name: text_title
dtype: string
- name: speaker
dtype: string
- name: date
dtype: int32
- name: type
dtype: string
- name: dialect
dtype: string
splits:
- name: full
num_bytes: 1210828
num_examples: 5210
download_size: 28899321
dataset_size: 1210828
- config_name: parallel_raw
features:
- name: line_number
dtype: string
- name: sentence_pair
dtype:
translation:
languages:
- en
- chr
- name: text_title
dtype: string
- name: speaker
dtype: string
- name: date
dtype: int32
- name: type
dtype: string
- name: dialect
dtype: string
splits:
- name: full
num_bytes: 5012923
num_examples: 14151
download_size: 28899321
dataset_size: 5012923
- config_name: monolingual
features:
- name: sentence
dtype: string
splits:
- name: chr
num_bytes: 882848
num_examples: 5210
- name: en5000
num_bytes: 615295
num_examples: 5000
- name: en10000
num_bytes: 1211645
num_examples: 10000
- name: en20000
num_bytes: 2432378
num_examples: 20000
- name: en50000
num_bytes: 6065780
num_examples: 49999
- name: en100000
num_bytes: 12130564
num_examples: 100000
download_size: 28899321
dataset_size: 23338510
- config_name: parallel
features:
- name: sentence_pair
dtype:
translation:
languages:
- en
- chr
splits:
- name: train
num_bytes: 3089658
num_examples: 11639
- name: dev
num_bytes: 260409
num_examples: 1000
- name: out_dev
num_bytes: 78134
num_examples: 256
- name: test
num_bytes: 264603
num_examples: 1000
- name: out_test
num_bytes: 80967
num_examples: 256
download_size: 28899321
dataset_size: 3773771
config_names:
- monolingual
- monolingual_raw
- parallel
- parallel_raw
---
# Dataset Card for ChrEn
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github repository for ChrEn](https://github.com/ZhangShiyue/ChrEn)
- **Paper:** [ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization](https://arxiv.org/abs/2010.04791)
- **Point of Contact:** [benfrey@email.unc.edu](mailto:benfrey@email.unc.edu)
### Dataset Summary
ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.
ChrEn is extremely low-resource, containing 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.
ChrEn also contains 5k Cherokee monolingual sentences to enable semi-supervised learning.
### Supported Tasks and Leaderboards
The dataset is intended to be used for `machine-translation` between English (`en`) and Cherokee (`chr`).
### Languages
The dataset contains English (`en`) and Cherokee (`chr`) text. The data encompasses both existing dialects of Cherokee: the Overhill dialect, mostly spoken in Oklahoma (OK), and the Middle dialect, mostly used in North Carolina (NC).
## Dataset Structure
### Data Instances
[More Information Needed]
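While instance examples are not yet documented here, the configuration and feature names declared in the metadata above allow a minimal loading sketch (illustrative and untested; the exact field layout may differ slightly):

```
from datasets import load_dataset

# "parallel" is one of the four configs listed above
# (monolingual, monolingual_raw, parallel, parallel_raw).
parallel = load_dataset("chr_en", "parallel", split="train")

pair = parallel[0]["sentence_pair"]  # a {"en": ..., "chr": ...} translation dict
print(pair["chr"], "->", pair["en"])

# The monolingual config exposes plain sentences, e.g. the Cherokee split:
chr_mono = load_dataset("chr_en", "monolingual", split="chr")
print(chr_mono[0]["sentence"])
```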
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Many of the source texts were translations of English materials, which means that the Cherokee structures may not be 100% natural in terms of what a speaker might spontaneously produce. Each text was translated by people who speak Cherokee as their first language, which means there is a high probability of grammaticality. These data were originally available as PDFs. We applied Optical Character Recognition (OCR) via the Tesseract OCR engine to extract the Cherokee and English text.
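For illustration only (this is not the curators' actual pipeline), a PDF-to-text extraction step with Tesseract typically looks like the sketch below; the `chr` language code assumes a Cherokee traineddata file is installed alongside English, and the input filename is a placeholder:

```
# Illustrative sketch, not the curators' actual pipeline.
# Assumes poppler (for pdf2image) and Tesseract with Cherokee ("chr")
# and English ("eng") language data are available.
import pytesseract
from pdf2image import convert_from_path

pages = convert_from_path("source_document.pdf", dpi=300)
text = "\n".join(
    pytesseract.image_to_string(page, lang="chr+eng") for page in pages
)
print(text[:500])
```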
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The sentences were manually aligned by Dr. Benjamin Frey, a proficient second-language speaker of Cherokee, who also fixed the errors introduced by OCR. This process was time-consuming and took several months.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was gathered and annotated by Shiyue Zhang, Benjamin Frey, and Mohit Bansal at UNC Chapel Hill.
### Licensing Information
The copyright of the data belongs to the original book/article authors or translators (hence, the data may only be used for research purposes; please contact Dr. Benjamin Frey for other copyright questions).
### Citation Information
```
@inproceedings{zhang2020chren,
title={ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization},
author={Zhang, Shiyue and Frey, Benjamin and Bansal, Mohit},
booktitle={EMNLP2020},
year={2020}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
coached_conv_pref | 2023-01-25T14:28:17.000Z | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:token-classification",
"task_ids:dialogue-modeling",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1... | null | A dataset consisting of 502 English dialogs with 12,000 annotated utterances between a user and an assistant discussing
movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers,
where one worker plays the role of an 'assistant', while the other plays the role of a 'user'. The 'assistant' elicits
the 'user’s' preferences about movies following a Coached Conversational Preference Elicitation (CCPE) method. The
assistant asks questions designed to minimize the bias in the terminology the 'user' employs to convey his or her
preferences as much as possible, and to obtain these preferences in natural language. Each dialog is annotated with
entity mentions, preferences expressed about entities, descriptions of entities provided, and other statements of
entities. | @inproceedings{48414,
title = {Coached Conversational Preference Elicitation: A Case Study in Understanding Movie Preferences},
author = {Filip Radlinski and Krisztian Balog and Bill Byrne and Karthik Krishnamoorthi},
year = {2019},
booktitle = {Proceedings of the Annual SIGdial Meeting on Discourse and Dialogue}
} | null | 2 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- other
- text-generation
- fill-mask
- token-classification
task_ids:
- dialogue-modeling
- parsing
paperswithcode_id: coached-conversational-preference-elicitation
pretty_name: Coached Conversational Preference Elicitation
tags:
- Conversational Recommendation
dataset_info:
features:
- name: conversationId
dtype: string
- name: utterances
sequence:
- name: index
dtype: int32
- name: speaker
dtype:
class_label:
names:
'0': USER
'1': ASSISTANT
- name: text
dtype: string
- name: segments
sequence:
- name: startIndex
dtype: int32
- name: endIndex
dtype: int32
- name: text
dtype: string
- name: annotations
sequence:
- name: annotationType
dtype:
class_label:
names:
'0': ENTITY_NAME
'1': ENTITY_PREFERENCE
'2': ENTITY_DESCRIPTION
'3': ENTITY_OTHER
- name: entityType
dtype:
class_label:
names:
'0': MOVIE_GENRE_OR_CATEGORY
'1': MOVIE_OR_SERIES
'2': PERSON
'3': SOMETHING_ELSE
config_name: coached_conv_pref
splits:
- name: train
num_bytes: 2295579
num_examples: 502
download_size: 5191959
dataset_size: 2295579
---
# Dataset Card for Coached Conversational Preference Elicitation
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Coached Conversational Preference Elicitation Homepage](https://research.google/tools/datasets/coached-conversational-preference-elicitation/)
- **Repository:** [Coached Conversational Preference Elicitation Repository](https://github.com/google-research-datasets/ccpe)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/W19-5941/)
### Dataset Summary
A dataset consisting of 502 English dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an 'assistant', while the other plays the role of a 'user'. The 'assistant' elicits the 'user’s' preferences about movies following a Coached Conversational Preference Elicitation (CCPE) method. The assistant asks questions designed to minimize the bias in the terminology the 'user' employs to convey his or her preferences as much as possible, and to obtain these preferences in natural language. Each dialog is annotated with entity mentions, preferences expressed about entities, descriptions of entities provided, and other statements of entities.
### Supported Tasks and Leaderboards
* `other-other-Conversational Recommendation`: The dataset can be used to train a model for conversational recommendation, which consists of Coached Conversational Preference Elicitation.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point comprises a series of utterances between the 'assistant' and the 'user'. Each utterance is annotated with the categories described in the Data Fields section below.
An example from the Coached Conversational Preference Elicitation dataset looks as follows:
```
{'conversationId': 'CCPE-6faee',
'utterances': {'index': [0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15],
'segments': [{'annotations': [{'annotationType': [], 'entityType': []}],
'endIndex': [0],
'startIndex': [0],
'text': ['']},
{'annotations': [{'annotationType': [0], 'entityType': [0]},
{'annotationType': [1], 'entityType': [0]}],
'endIndex': [20, 27],
'startIndex': [14, 0],
'text': ['comedy', 'I really like comedy movies']},
{'annotations': [{'annotationType': [0], 'entityType': [0]}],
'endIndex': [24],
'startIndex': [16],
'text': ['comedies']},
{'annotations': [{'annotationType': [1], 'entityType': [0]}],
'endIndex': [15],
'startIndex': [0],
'text': ['I love to laugh']},
{'annotations': [{'annotationType': [], 'entityType': []}],
'endIndex': [0],
'startIndex': [0],
'text': ['']},
{'annotations': [{'annotationType': [0], 'entityType': [1]},
{'annotationType': [1], 'entityType': [1]}],
'endIndex': [21, 21],
'startIndex': [8, 0],
'text': ['Step Brothers', 'I liked Step Brothers']},
{'annotations': [{'annotationType': [], 'entityType': []}],
'endIndex': [0],
'startIndex': [0],
'text': ['']},
{'annotations': [{'annotationType': [1], 'entityType': [1]}],
'endIndex': [32],
'startIndex': [0],
'text': ['Had some amazing one-liners that']},
{'annotations': [{'annotationType': [], 'entityType': []}],
'endIndex': [0],
'startIndex': [0],
'text': ['']},
{'annotations': [{'annotationType': [0], 'entityType': [1]},
{'annotationType': [1], 'entityType': [1]}],
'endIndex': [15, 15],
'startIndex': [13, 0],
'text': ['RV', "I don't like RV"]},
{'annotations': [{'annotationType': [], 'entityType': []}],
'endIndex': [0],
'startIndex': [0],
'text': ['']},
{'annotations': [{'annotationType': [1], 'entityType': [1]},
{'annotationType': [1], 'entityType': [1]}],
'endIndex': [48, 66],
'startIndex': [18, 50],
'text': ['It was just so slow and boring', "I didn't like it"]},
{'annotations': [{'annotationType': [0], 'entityType': [1]}],
'endIndex': [63],
'startIndex': [33],
'text': ['Jurassic World: Fallen Kingdom']},
{'annotations': [{'annotationType': [0], 'entityType': [1]},
{'annotationType': [3], 'entityType': [1]}],
'endIndex': [52, 52],
'startIndex': [22, 0],
'text': ['Jurassic World: Fallen Kingdom',
'I have seen the movie Jurassic World: Fallen Kingdom']},
{'annotations': [{'annotationType': [], 'entityType': []}],
'endIndex': [0],
'startIndex': [0],
'text': ['']},
{'annotations': [{'annotationType': [1], 'entityType': [1]},
{'annotationType': [1], 'entityType': [1]},
{'annotationType': [1], 'entityType': [1]}],
'endIndex': [24, 125, 161],
'startIndex': [0, 95, 135],
'text': ['I really like the actors',
'I just really like the scenery',
'the dinosaurs were awesome']}],
'speaker': [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
'text': ['What kinds of movies do you like?',
'I really like comedy movies.',
'Why do you like comedies?',
"I love to laugh and comedy movies, that's their whole purpose. Make you laugh.",
'Alright, how about a movie you liked?',
'I liked Step Brothers.',
'Why did you like that movie?',
'Had some amazing one-liners that still get used today even though the movie was made awhile ago.',
'Well, is there a movie you did not like?',
"I don't like RV.",
'Why not?',
"And I just didn't It was just so slow and boring. I didn't like it.",
'Ok, then have you seen the movie Jurassic World: Fallen Kingdom',
'I have seen the movie Jurassic World: Fallen Kingdom.',
'What is it about these kinds of movies that you like or dislike?',
'I really like the actors. I feel like they were doing their best to make the movie better. And I just really like the scenery, and the the dinosaurs were awesome.']}}
```
### Data Fields
Each conversation has the following fields:
* `conversationId`: A unique random ID for the conversation. The ID has no meaning.
* `utterances`: An array of utterances by the workers.
Each utterance has the following fields:
* `index`: A 0-based index indicating the order of the utterances in the conversation.
* `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance.
* `text`: The raw text as written by the ASSISTANT, or transcribed from the spoken recording of USER.
* `segments`: An array of semantic annotations of spans in the text.
Each semantic annotation segment has the following fields:
* `startIndex`: The position of the start of the annotation in the utterance text.
* `endIndex`: The position of the end of the annotation in the utterance text.
* `text`: The raw text that has been annotated.
* `annotations`: An array of annotation details for this segment.
Each annotation has two fields:
* `annotationType`: The class of annotation (see ontology below).
* `entityType`: The class of the entity to which the text refers (see ontology below).
**EXPLANATION OF ONTOLOGY**
In the corpus, preferences and the entities that these preferences refer to are annotated with an annotation type as well as an entity type.
Annotation types fall into four categories:
* `ENTITY_NAME` (0): These mark the names of relevant entities mentioned.
* `ENTITY_PREFERENCE` (1): These are defined as statements indicating that the dialog participant does or does not like the relevant entity in general, or that they do or do not like some aspect of the entity. This may also be thought of as the participant having some sentiment about what is being discussed.
* `ENTITY_DESCRIPTION` (2): Neutral descriptions that describe an entity but do not convey an explicit liking or disliking.
* `ENTITY_OTHER` (3): Other relevant statements about an entity that convey relevant information of how the participant relates to the entity but do not provide a sentiment. Most often, these relate to whether a participant has seen a particular movie, or knows a lot about a given entity.
Entity types are marked as belonging to one of four categories:
* `MOVIE_GENRE_OR_CATEGORY` (0): For genres or general descriptions that capture a particular type or style of movie.
* `MOVIE_OR_SERIES` (1): For the full or partial name of a movie or series of movies.
* `PERSON` (2): For the full or partial name of an actual person.
* `SOMETHING_ELSE` (3): For other important proper nouns, such as the names of characters or locations.
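In the loaded dataset these classes appear as integer ids (see the instance above); a small sketch using the label tables in this section can decode them for inspection (illustrative, untested; the nesting follows the printed example):

```
from datasets import load_dataset

ANNOTATION_TYPES = ["ENTITY_NAME", "ENTITY_PREFERENCE", "ENTITY_DESCRIPTION", "ENTITY_OTHER"]
ENTITY_TYPES = ["MOVIE_GENRE_OR_CATEGORY", "MOVIE_OR_SERIES", "PERSON", "SOMETHING_ELSE"]

example = load_dataset("coached_conv_pref", split="train")[0]

# One segment dict per utterance, each holding parallel lists of
# segment texts and their annotation dicts.
for segments in example["utterances"]["segments"]:
    for seg_text, ann in zip(segments["text"], segments["annotations"]):
        for a_type, e_type in zip(ann["annotationType"], ann["entityType"]):
            print(f"{seg_text!r}: {ANNOTATION_TYPES[a_type]} / {ENTITY_TYPES[e_type]}")
```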
### Data Splits
There is a single split of the dataset named 'train', which contains the whole dataset.
| | Train |
| ------------------- | ----- |
| Input Conversations | 502 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{radlinski-etal-2019-ccpe,
title = {Coached Conversational Preference Elicitation: A Case Study in Understanding Movie Preferences},
author = {Filip Radlinski and Krisztian Balog and Bill Byrne and Karthik Krishnamoorthi},
booktitle = {Proceedings of the Annual Meeting of the Special Interest Group on Discourse and Dialogue ({SIGDIAL})},
year = 2019
}
```
### Contributions
Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset. |
dyk | 2023-01-25T14:29:39.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:bsd-3-clause",
"region:us"
] | null | The Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question. | @inproceedings{marcinczuk2013open,
title={Open dataset for development of Polish Question Answering systems},
author={Marcinczuk, Michal and Ptak, Marcin and Radziszewski, Adam and Piasecki, Maciej},
booktitle={Proceedings of the 6th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, Wydawnictwo Poznanskie, Fundacja Uniwersytetu im. Adama Mickiewicza},
year={2013}
} | null | 0 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
pretty_name: dyk
dataset_info:
features:
- name: q_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: target
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 1388690
num_examples: 4154
- name: test
num_bytes: 353643
num_examples: 1029
download_size: 685462
dataset_size: 1742333
---
# Dataset Card for Did You Know (Czy wiesz?)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.pwr.wroc.pl/en/tools-and-resources/resources/czy-wiesz-question-answering-dataset
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- q_id: question id
- question: question sentence
- answer: answer sentence
- target: 1 if the answer is correct, 0 otherwise. Note that the test split does not have target values, so -1 is used instead (see the loading sketch below)
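A minimal sketch of loading the data and using the `target` label (field names as listed above; illustrative, untested):

```
from datasets import load_dataset

ds = load_dataset("dyk")
example = ds["train"][0]
print(example["question"], "->", example["answer"])

# Keep only the pairs whose answer is marked correct (target == 1).
correct = ds["train"].filter(lambda ex: ex["target"] == 1)
print(len(correct), "correct question-answer pairs")
```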
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-SA 3.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset. |
eduge | 2023-01-25T14:29:42.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:mn",
"license:unknown",
"region:us"
] | null | Eduge news classification dataset is provided by Bolorsoft LLC. It is used for training the Eduge.mn production news classifier.
75K news articles in 9 categories: урлаг соёл, эдийн засаг, эрүүл мэнд, хууль, улс төр, спорт, технологи, боловсрол and байгал орчин | null | null | 3 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- mn
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: Eduge
dataset_info:
features:
- name: news
dtype: string
- name: label
dtype:
class_label:
names:
'0': урлаг соёл
'1': эдийн засаг
'2': эрүүл мэнд
'3': хууль
'4': улс төр
'5': спорт
'6': технологи
'7': боловсрол
'8': байгал орчин
splits:
- name: train
num_bytes: 255275842
num_examples: 60528
- name: test
num_bytes: 64451731
num_examples: 15133
download_size: 320395067
dataset_size: 319727573
---
# Dataset Card for Eduge
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://eduge.mn/
- **Repository:** https://github.com/tugstugi/mongolian-nlp
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The Eduge news classification dataset is provided by Bolorsoft LLC and is used to train the Eduge.mn production news classifier.
It contains 75K news articles in 9 categories: урлаг соёл, эдийн засаг, эрүүл мэнд, хууль, улс төр, спорт, технологи, боловсрол and байгал орчин.
### Supported Tasks and Leaderboards
- `text-classification`: The dataset supports a 9-class news topic classification task.
### Languages
The text in the dataset is in Mongolian.
## Dataset Structure
### Data Instances
For the `default` configuration:
```
{
'label': 0, # 'урлаг соёл'
'news': 'Шударга өрсөлдөөн, хэрэглэгчийн төлөө газар 2013 оны дөрөвдүгээр сараас эхлэн Монгол киноны ашиг орлогын мэдээллийг олон нийтэд хүргэж байгаа. Ингэснээр Монголын кино үйлдвэрлэгчид улсад ашиг орлогоо шударгаар төлөх, мөн чанартай уран бүтээлийн тоо өсөх боломж бүрдэж байгаа юм.',
}
```
### Data Fields
- `news`: a complete news article on a specific topic as a string
- `label`: the single class of the topic, among these values: "урлаг соёл" (0), "эдийн засаг" (1), "эрүүл мэнд" (2), "хууль" (3), "улс төр" (4), "спорт" (5), "технологи" (6), "боловсрол" (7), "байгал орчин" (8); see the decoding sketch below.
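A minimal sketch of decoding the integer `label` back to its category name with the `datasets` library (illustrative, untested):

```
from datasets import load_dataset

ds = load_dataset("eduge", split="train")
label_feature = ds.features["label"]  # ClassLabel with the 9 names above

example = ds[0]
print(label_feature.int2str(example["label"]))  # e.g. "урлаг соёл"
print(example["news"][:200])
```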
### Data Splits
The set of complete articles is split into a training and test set.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
Eduge.mn, which combines content from shuud.mn, ikon.mn, olloo.mn, news.gogo.mn, montsame.mn, zaluu.com, sonin.mn, medee.mn, and bloombergtv.mn.
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
No citation available for this dataset.
### Contributions
Thanks to [@enod](https://github.com/enod) for adding this dataset. |
eitb_parcc | 2022-11-03T16:15:31.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:es",
"language:eu",
"license:unknown",
"region:us"
] | null | EiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by Vicomtech (https://www.vicomtech.org), extracted from comparable news produced by the Basque public broadcasting group Euskal Irrati Telebista. | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 1 | 3 | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
- eu
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: eitb-parcc
pretty_name: EiTB-ParCC
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- es
- eu
config_name: es-eu
splits:
- name: train
num_bytes: 139039398
num_examples: 637183
download_size: 57244346
dataset_size: 139039398
---
# Dataset Card for EiTB-ParCC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [EiTB-ParCC: Parallel Corpus of Comparable News](http://opus.nlpl.eu/EiTB-ParCC.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
EiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by Vicomtech (https://www.vicomtech.org), extracted from comparable news produced by the Basque public broadcasting group Euskal Irrati Telebista.
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
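Pending a documented example, a minimal loading sketch based on the `es-eu` configuration and `translation` feature declared in the metadata above (illustrative, untested):

```
from datasets import load_dataset

ds = load_dataset("eitb_parcc", "es-eu", split="train")

pair = ds[0]["translation"]  # a {"es": ..., "eu": ...} dict
print(pair["es"])
print(pair["eu"])
```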
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
eth_py150_open | 2022-11-18T20:01:17.000Z | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"contextual-embeddings",
"region:us"
] | null | A redistributable subset of the ETH Py150 corpus, introduced in the ICML 2020 paper 'Learning and Evaluating Contextual Embedding of Source Code' | @inproceedings{kanade2020learning,
title={Learning and Evaluating Contextual Embedding of Source Code},
author={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen},
booktitle={International Conference on Machine Learning},
pages={5110--5121},
year={2020},
organization={PMLR}
} | null | 0 | 3 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: eth-py150-open
pretty_name: ethpy150open
tags:
- contextual-embeddings
dataset_info:
features:
- name: filepath
dtype: string
- name: license
dtype: string
config_name: eth_py150_open
splits:
- name: train
num_bytes: 5414978
num_examples: 74749
- name: test
num_bytes: 3006199
num_examples: 41457
- name: validation
num_bytes: 598524
num_examples: 8302
download_size: 13875671
dataset_size: 9019701
---
# Dataset Card for ethpy150open
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.sri.inf.ethz.ch/py150
- **Repository:** https://github.com/google-research-datasets/eth_py150_open
- **Paper:** https://proceedings.icml.cc/static/paper_files/icml/2020/5401-Paper.pdf
- **Leaderboard:** None
- **Point of Contact:** Aditya Kanade <kanade@iisc.ac.in>, Petros Maniatis <maniatis@google.com>
### Dataset Summary
A redistributable subset of the [ETH Py150 corpus](https://www.sri.inf.ethz.ch/py150), introduced in the ICML 2020 paper ['Learning and Evaluating Contextual Embedding of Source Code'](https://proceedings.icml.cc/static/paper_files/icml/2020/5401-Paper.pdf)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
Each split is a list of records of the form:
```
{
  "filepath": The relative URL containing the path to the file on GitHub,
  "license": The license used for that specific file or repository
}
```
### Data Instances
```
{
  "filepath": "0rpc/zerorpc-python/setup.py",
  "license": "mit"
},
{
  "filepath": "0rpc/zerorpc-python/zerorpc/heartbeat.py",
  "license": "mit"
}
```
### Data Fields
- `filepath`: The relative URL containing the path to the file on GitHub
- `license`: The license used for that specific file or repository
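Since each `filepath` is `<user>/<repo>/<path>` relative to GitHub, and the master branch as available at collection time was used, one plausible (hypothetical, untested) way to fetch an underlying source file is:

```
from datasets import load_dataset

ds = load_dataset("eth_py150_open", split="train")

def raw_github_url(filepath: str) -> str:
    # e.g. "0rpc/zerorpc-python/setup.py" -> user, repo, path within the repo
    user, repo, path = filepath.split("/", 2)
    return f"https://raw.githubusercontent.com/{user}/{repo}/master/{path}"

# Note: repositories may have moved or been renamed since collection.
print(raw_github_url(ds[0]["filepath"]))
```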
### Data Splits
| | Train | Valid | Test |
| ----- | ------- | ----- | ----- |
| Dataset Split | 74749 | 8302 | 41457 |
## Dataset Creation
The original dataset is at https://www.sri.inf.ethz.ch/py150
### Curation Rationale
To generate a more freely redistributable version of the dataset.
### Source Data
#### Initial Data Collection and Normalization
All the URLs are file paths relative to GitHub, and the master branch was used as available at the time of collection.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Apache License 2.0
### Citation Information
```
@inproceedings{kanade2020learning,
  title={Learning and Evaluating Contextual Embedding of Source Code},
  author={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen},
  booktitle={International Conference on Machine Learning},
  pages={5110--5121},
  year={2020},
  organization={PMLR}
}
```
### Contributions
Thanks to [@Bharat123rox](https://github.com/Bharat123rox) for adding this dataset. |
finer | 2023-01-25T14:30:30.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:fi",
"license:mit",
"arxiv:1908.04212",
"region:us"
... | null | The directory data contains a corpus of Finnish technology related news articles with a manually prepared
named entity annotation (digitoday.2014.csv). The text material was extracted from the archives of Digitoday,
a Finnish online technology news source (www.digitoday.fi). The corpus consists of 953 articles
(193,742 word tokens) with six named entity classes (organization, location, person, product, event, and date).
The corpus is available for research purposes and can be readily used for development of NER systems for Finnish. | @article{ruokolainen2019finnish,
title={A finnish news corpus for named entity recognition},
author={Ruokolainen, Teemu and Kauppinen, Pekka and Silfverberg, Miikka and Lind{\'e}n, Krister},
journal={Language Resources and Evaluation},
pages={1--26},
year={2019},
publisher={Springer}
} | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- fi
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: finer
pretty_name: Finnish News Corpus for Named Entity Recognition
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-DATE
'2': B-EVENT
'3': B-LOC
'4': B-ORG
'5': B-PER
'6': B-PRO
'7': I-DATE
'8': I-EVENT
'9': I-LOC
'10': I-ORG
'11': I-PER
'12': I-PRO
- name: nested_ner_tags
sequence:
class_label:
names:
'0': O
'1': B-DATE
'2': B-EVENT
'3': B-LOC
'4': B-ORG
'5': B-PER
'6': B-PRO
'7': I-DATE
'8': I-EVENT
'9': I-LOC
'10': I-ORG
'11': I-PER
'12': I-PRO
config_name: finer
splits:
- name: train
num_bytes: 5159550
num_examples: 13497
- name: validation
num_bytes: 387494
num_examples: 986
- name: test
num_bytes: 1327354
num_examples: 3512
- name: test_wikipedia
num_bytes: 1404397
num_examples: 3360
download_size: 3733127
dataset_size: 8278795
---
# Dataset Card for Finnish News Corpus for Named Entity Recognition
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/mpsilfve/finer-data)
- **Repository:** [Github](https://github.com/mpsilfve/finer-data)
- **Paper:** [Arxiv](https://arxiv.org/abs/1908.04212)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A corpus of Finnish technology-related news articles with a manually prepared named entity annotation (digitoday.2014.csv). The text material was extracted from the archives of Digitoday, a Finnish online technology news source (www.digitoday.fi). The corpus consists of 953 articles (193,742 word tokens) with six named entity classes (organization, location, person, product, event, and date).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each row consists of the following fields:
* `id`: The sentence id
* `tokens`: An ordered list of tokens from the full text
* `ner_tags`: Named entity recognition tags for each token
* `nested_ner_tags`: Nested named entity recognition tags for each token
Note that by design, the length of `tokens`, `ner_tags`, and `nested_ner_tags` will always be identical.
`ner_tags` and `nested_ner_tags` correspond to the list below:
```
[ "O", "B-DATE", "B-EVENT", "B-LOC", "B-ORG", "B-PER", "B-PRO", "I-DATE", "I-EVENT", "I-LOC", "I-ORG", "I-PER", "I-PRO" ]
```
The IOB2 labeling scheme is used.
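A minimal sketch of mapping the integer tags back to the label names above via the `datasets` features (illustrative, untested):

```
from datasets import load_dataset

ds = load_dataset("finer", split="train")
tag_names = ds.features["ner_tags"].feature.names  # the 13-label list above

example = ds[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(token, tag_names[tag])
```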
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@stefan-it](https://github.com/stefan-it) for adding this dataset. |
fquad | 2023-04-05T10:06:27.000Z | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:extractive-qa",
"task_ids:closed-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datase... | null | FQuAD: French Question Answering Dataset
We introduce FQuAD, a native French Question Answering Dataset. FQuAD contains 25,000+ question and answer pairs.
Finetuning CamemBERT on FQuAD yields an F1 score of 88% and an exact match of 77.9%. | @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and
Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
} | null | 7 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- fr
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
task_ids:
- extractive-qa
- closed-domain-qa
paperswithcode_id: fquad
pretty_name: 'FQuAD: French Question Answering Dataset'
dataset_info:
features:
- name: context
dtype: string
- name: questions
sequence: string
- name: answers
sequence:
- name: texts
dtype: string
- name: answers_starts
dtype: int32
splits:
- name: train
num_bytes: 5898752
num_examples: 4921
- name: validation
num_bytes: 1031456
num_examples: 768
download_size: 0
dataset_size: 6930208
---
# Dataset Card for FQuAD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://fquad.illuin.tech/](https://fquad.illuin.tech/)
- **Paper:** [FQuAD: French Question Answering Dataset](https://arxiv.org/abs/2002.06071)
- **Point of Contact:** [https://www.illuin.tech/contact/](https://www.illuin.tech/contact/)
- **Size of downloaded dataset files:** 3.29 MB
- **Size of the generated dataset:** 6.94 MB
- **Total amount of disk used:** 10.23 MB
### Dataset Summary
FQuAD: French Question Answering Dataset
We introduce FQuAD, a native French Question Answering Dataset.
FQuAD contains 25,000+ question and answer pairs.
Finetuning CamemBERT on FQuAD yields an F1 score of 88% and an exact match of 77.9%.
It was developed to provide a SQuAD equivalent in the French language. Questions are original and based on high-quality Wikipedia articles.
Please note that this dataset is licensed for non-commercial purposes and users must agree to the following terms and conditions:
1. Use FQuAD only for internal research purposes.
2. Not make any copy except a safety one.
3. Not redistribute it (or part of it) in any way, even for free.
4. Not sell it or use it for any commercial purpose. Contact us for a possible commercial licence.
5. Mention the corpus origin and Illuin Technology in all publications about experiments using FQuAD.
6. Redistribute to Illuin Technology any improved or enriched version you could make of that corpus.
Manual download of the data must be requested from: https://fquad.illuin.tech/
### Supported Tasks and Leaderboards
- `closed-domain-qa`, `text-retrieval`: This dataset is intended to be used for `closed-domain-qa`, but can also be used for information retrieval tasks.
### Languages
This dataset is exclusively in French, with context data from Wikipedia and questions from French university students (`fr`).
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.29 MB
- **Size of the generated dataset:** 6.94 MB
- **Total amount of disk used:** 10.23 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answers_starts": [161, 46, 204],
"texts": ["La Vierge aux rochers", "documents contemporains", "objets de spéculations"]
},
"context": "\"Les deux tableaux sont certes décrits par des documents contemporains à leur création mais ceux-ci ne le font qu'indirectement ...",
"questions": ["Que concerne principalement les documents ?", "Par quoi sont décrit les deux tableaux ?", "Quels types d'objets sont les deux tableaux aux yeux des chercheurs ?"]
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `context`: a `string` feature.
- `questions`: a `list` of `string` features.
- `answers`: a dictionary feature containing:
- `texts`: a `string` feature.
- `answers_starts`: an `int32` feature.
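Because the files must be requested manually (see the summary above), loading goes through `data_dir`; a minimal sketch, where `path/to/fquad` is a placeholder for the folder holding the downloaded files (illustrative, untested):

```
from datasets import load_dataset

# "path/to/fquad" stands in for the folder holding the files
# obtained from https://fquad.illuin.tech/
ds = load_dataset("fquad", data_dir="path/to/fquad")

example = ds["train"][0]
print(example["context"][:200])
print(example["questions"][0], "->", example["answers"]["texts"][0])
```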
### Data Splits
The FQuAD dataset has 3 splits: _train_, _validation_, and _test_. The _test_ split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.
Dataset Split | Number of Articles in Split | Number of paragraphs in split | Number of questions in split
--------------|------------------------------|--------------------------|-------------------------
Train | 117 | 4921 | 20731
Validation | - | 768 | 3188
Test | 10 | 532 | 2189
## Dataset Creation
### Curation Rationale
The FQuAD dataset was created by Illuin technology. It was developped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles.
### Source Data
The text used for the contexts are from the curated list of French High-Quality Wikipedia [articles](https://fr.wikipedia.org/wiki/Cat%C3%A9gorie:Article_de_qualit%C3%A9).
### Annotations
Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering.
Wikipedia articles were scraped and Illuin used an internally developed tool to help annotators ask questions and indicate the answer spans.
Annotators were given paragraph sized contexts and asked to generate 4/5 non-trivial questions about information in the context.
### Personal and Sensitive Information
No personal or sensitive information is included in this dataset. This has been manually verified by the dataset curators.
## Considerations for Using the Data
Users should consider this dataset is sampled from Wikipedia data which might not be representative of all QA use cases.
### Social Impact of Dataset
The social biases of this dataset have not yet been investigated.
### Discussion of Biases
The social biases of this dataset have not yet been investigated, though articles have been selected by their quality and objectivity.
### Other Known Limitations
The limitations of the FQuAD dataset have not yet been investigated.
## Additional Information
### Dataset Curators
Illuin Technology: [https://fquad.illuin.tech/](https://fquad.illuin.tech/)
### Licensing Information
The FQuAD dataset is licensed under the [CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0/fr/) license.
It allows personal and academic research uses of the dataset, but not commercial uses. So concretely, the dataset cannot be used to train a model that is then put into production within a business or a company. For this type of commercial use, we invite FQuAD users to contact [the authors](https://www.illuin.tech/contact/) to discuss possible partnerships.
### Citation Information
```
@ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and
Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
Thanks to [@ManuelFay](https://github.com/manuelfay) for providing information on the dataset creation process. |
giga_fren | 2022-11-03T16:15:21.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"language:fr",
"license:unknown",
"region:us"
] | null | Giga-word corpus for French-English from WMT2010 collected by Chris Callison-Burch
2 languages, total number of files: 452
total number of tokens: 1.43G
total number of sentence fragments: 47.55M | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 0 | 3 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- fr
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: GigaFren
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
config_name: en-fr
splits:
- name: train
num_bytes: 8690296821
num_examples: 22519904
download_size: 2701536198
dataset_size: 8690296821
---
# Dataset Card for GigaFren
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/giga-fren.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Giga-word corpus for French-English from WMT2010, collected by Chris Callison-Burch. 2 languages; total number of files: 452; total number of tokens: 1.43G; total number of sentence fragments: 47.55M.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each instance holds an `id` and a `translation` dictionary with `en` and `fr` keys (see the sketch below).
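A minimal loading sketch based on the `en-fr` configuration declared in the metadata above (illustrative, untested):

```
from datasets import load_dataset

ds = load_dataset("giga_fren", "en-fr", split="train")

pair = ds[0]["translation"]  # a {"en": ..., "fr": ...} dict
print(pair["en"])
print(pair["fr"])
```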
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
hausa_voa_ner | 2023-01-25T14:31:51.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ha",
"license:cc-by-4.0",
"region:us"
] | null | The Hausa VOA NER dataset is a labeled dataset for named entity recognition in Hausa. The texts were obtained from
Hausa Voice of America News articles https://www.voahausa.com/ . We concentrate on
four types of named entities: persons [PER], locations [LOC], organizations [ORG], and dates & time [DATE].
The Hausa VOA NER data files contain 2 columns separated by a tab ('\t'). Each word has been put on a separate line and
there is an empty line after each sentence, i.e. the CoNLL format. The first item on each line is a word, the second
is the named entity tag. The named entity tags have the format I-TYPE, which means that the word is inside a phrase
of type TYPE. For every multi-word expression like 'New York', the first word gets the tag B-TYPE and the subsequent words
have tags I-TYPE; a word with tag O is not part of a phrase. The dataset is in the BIO tagging scheme.
For more details, see https://www.aclweb.org/anthology/2020.emnlp-main.204/ | @inproceedings{hedderich-etal-2020-transfer,
title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on {A}frican Languages",
author = "Hedderich, Michael A. and
Adelani, David and
Zhu, Dawei and
Alabi, Jesujoba and
Markus, Udia and
Klakow, Dietrich",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.204",
doi = "10.18653/v1/2020.emnlp-main.204",
pages = "2580--2591",
} | null | 2 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ha
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Hausa VOA NER Corpus
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-DATE
'8': I-DATE
config_name: hausa_voa_ner
splits:
- name: train
num_bytes: 483634
num_examples: 1015
- name: validation
num_bytes: 69673
num_examples: 146
- name: test
num_bytes: 139227
num_examples: 292
download_size: 324962
dataset_size: 692534
---
# Dataset Card for Hausa VOA NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.aclweb.org/anthology/2020.emnlp-main.204/
- **Repository:** [Hausa VOA NER](https://github.com/uds-lsv/transfer-distant-transformer-african/tree/master/data/hausa_ner)
- **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.204/
- **Leaderboard:**
- **Point of Contact:** [David Adelani](mailto:didelani@lsv.uni-saarland.de)
### Dataset Summary
The Hausa VOA NER is a named entity recognition (NER) dataset for the Hausa language based on the [VOA Hausa news](https://www.voahausa.com/) corpus.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Hausa.
## Dataset Structure
### Data Instances
A data point consists of sentences separated by an empty line, with tab-separated tokens and tags.
{'id': '0',
'ner_tags': [B-PER, 0, 0, B-LOC, 0],
'tokens': ['Trump', 'ya', 'ce', 'Rasha', 'ma']
}
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & times (DATE). (O) is used for tokens not considered part of any named entity.
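To make the tagging scheme concrete, here is a small sketch that loads the dataset and maps the integer `ner_tags` back to their string labels via the `datasets` features (the id `hausa_voa_ner` is the one this card describes):

```python
from datasets import load_dataset

ds = load_dataset("hausa_voa_ner", split="train")

# `ner_tags` is a sequence of ClassLabel ids; recover the tag strings.
label_names = ds.features["ner_tags"].feature.names

example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag_id]}")
```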
### Data Splits
Training (1,014 sentences), validation (145 sentences) and test split (291 sentences)
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources to a new language, Hausa.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset is based on the news domain and was crawled from [VOA Hausa news](https://www.voahausa.com/).
[More Information Needed]
#### Who are the source language producers?
The dataset was collected from VOA Hausa news. Most of the texts used in creating the Hausa VOA NER are news stories from Nigeria, the Niger Republic, the United States, and other parts of the world.
[More Information Needed]
### Annotations
Named entity recognition annotation
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated by Jesujoba Alabi and David Adelani for the paper:
[Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on African Languages](https://www.aclweb.org/anthology/2020.emnlp-main.204/).
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated datasets were developed by students of Saarland University, Saarbrücken, Germany.
### Licensing Information
The data is licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@inproceedings{hedderich-etal-2020-transfer,
title = "Transfer Learning and Distant Supervision for Multilingual Transformer Models: A Study on {A}frican Languages",
author = "Hedderich, Michael A. and
Adelani, David and
Zhu, Dawei and
Alabi, Jesujoba and
Markus, Udia and
Klakow, Dietrich",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.204",
doi = "10.18653/v1/2020.emnlp-main.204",
pages = "2580--2591",
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset. |
hebrew_this_world | 2022-11-03T16:08:08.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:he... | null | HebrewThisWorld is a data set consists of 2028 issues of the newspaper 'This World' edited by Uri Avnery and were published between 1950 and 1989. Released under the AGPLv3 license. | null | null | 1 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- he
license:
- agpl-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: HebrewSentiment
dataset_info:
features:
- name: issue_num
dtype: int64
- name: page_count
dtype: int64
- name: date
dtype: string
- name: date_he
dtype: string
- name: year
dtype: string
- name: href
dtype: string
- name: pdf
dtype: string
- name: coverpage
dtype: string
- name: backpage
dtype: string
- name: content
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 678389435
num_examples: 2028
download_size: 678322912
dataset_size: 678389435
---
# Dataset Card for HebrewSentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://thisworld.online/
- **Repository:** https://github.com/thisworld1/thisworld.online
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
HebrewThisWorld is a dataset consisting of 2028 issues of the newspaper 'This World', edited by Uri Avnery and published between 1950 and 1989. Released under the AGPLv3 license.
### Supported Tasks and Leaderboards
Language modeling
### Languages
Hebrew
## Dataset Structure
CSV file with "," delimiter
### Data Instances
Sample:
```json
{
"issue_num": 637,
"page_count": 16,
"date": "1950-01-01",
"date_he": "1 בינואר 1950",
"year": "1950",
"href": "https://thisworld.online/1950/637",
"pdf": "https://olam.eu-central-1.linodeobjects.com/pdfs/B-I0637-D010150.pdf",
"coverpage": "https://olam.eu-central-1.linodeobjects.com/pages/637/t-1.png",
"backpage": "https://olam.eu-central-1.linodeobjects.com/pages/637/t-16.png",
"content": "\nלפיד\nהנוער ־ בירושלים צילומים :\n\nב. רותנברג\n\nוזהו הלפיד\n...",
"url": "https://thisworld.online/api/1950/637"
}
```
### Data Fields
- `issue_num`: ID/Number of the issue
- `page_count`: Page count of the current issue
- `date`: Published date
- `date_he`: Published date in Hebrew
- `year`: Year of the issue
- `href`: URL to the issue to scan/print etc.
- `pdf`: URL to the issue to scan in pdf
- `coverpage`: URL to coverpage
- `backpage`: URL to backpage
- `content`: text content of the issue
- `url`: URL
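Because each row is a full issue with publication metadata, slicing the corpus by time period is straightforward. A minimal sketch (the id `hebrew_this_world` is the one this card describes; note the full download is roughly 678 MB):

```python
from datasets import load_dataset

ds = load_dataset("hebrew_this_world", split="train")

# `year` is stored as a string, so filter with a string prefix match.
sixties = ds.filter(lambda issue: issue["year"].startswith("196"))
print(len(sixties), "issues from the 1960s")
print(sixties[0]["date"], sixties[0]["href"])
```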
### Data Splits
| | train |
|--------|------:|
| corpus | 2028 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[thisworld.online](https://thisworld.online/)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Researchers
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
GNU AGPLv3+
This is free software, and you are welcome to redistribute it under certain conditions.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
### Citation Information
https://thisworld.online/
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@imvladikon](https://github.com/imvladikon) for adding this dataset. |
hrenwac_para | 2022-11-03T16:07:49.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:hr",
"license:cc-by-sa-3.0",
"region:us"
] | null | The hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the .hr top-level domain for Croatia.
The corpus was built with Spidextor (https://github.com/abumatran/spidextor), a tool that glues together the output of SpiderLing used for crawling and Bitextor used for bitext extraction. The accuracy of the extracted bitext on the segment level is around 80% and on the word level around 84%. | @misc{11356/1058,
title = {Croatian-English parallel corpus {hrenWaC} 2.0},
author = {Ljube{\v s}i{\'c}, Nikola and Espl{\`a}-Gomis, Miquel and Ortiz Rojas, Sergio and Klubi{\v c}ka, Filip and Toral, Antonio},
url = {http://hdl.handle.net/11356/1058},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {{CLARIN}.{SI} User Licence for Internet Corpora},
year = {2016} } | null | 0 | 3 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- hr
license:
- cc-by-sa-3.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: HrenwacPara
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- hr
config_name: hrenWaC
splits:
- name: train
num_bytes: 29602110
num_examples: 99001
download_size: 11640281
dataset_size: 29602110
---
# Dataset Card for hrenwac_para
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/hrenwac/
- **Repository:** http://nlp.ffzg.hr/data/corpora/hrenwac/hrenwac.en-hr.txt.gz
- **Paper:** http://workshop2013.iwslt.org/downloads/IWSLT-2013-Cettolo.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)
### Dataset Summary
The hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the .hr top-level domain for Croatia. The corpus was built with Spidextor (https://github.com/abumatran/spidextor), a tool that glues together the output of SpiderLing used for crawling and Bitextor used for bitext extraction. The accuracy of the extracted bitext on the segment level is around 80% and on the word level around 84%.
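Given the stated ~80% segment-level accuracy of the extracted bitext, it is common to apply a cheap noise filter before using such pairs for MT. The sketch below is an illustrative heuristic (a character-length-ratio filter), not part of the dataset; the id `hrenwac_para` and config `hrenWaC` come from this card's metadata:

```python
from datasets import load_dataset

ds = load_dataset("hrenwac_para", "hrenWaC", split="train")

def plausible_pair(ex, low=0.5, high=2.0):
    # Crude bitext noise filter: drop pairs with an extreme length ratio.
    en, hr = ex["translation"]["en"], ex["translation"]["hr"]
    if not en or not hr:
        return False
    return low <= len(en) / len(hr) <= high

filtered = ds.filter(plausible_pair)
print(f"kept {len(filtered)} of {len(ds)} pairs")
```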
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is bilingual: Croatian and English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@misc{11356/1058,
title = {Croatian-English parallel corpus {hrenWaC} 2.0},
author = {Ljube{\v s}i{\'c}, Nikola and Espl{\`a}-Gomis, Miquel and Ortiz Rojas, Sergio and Klubi{\v c}ka, Filip and Toral, Antonio},
url = {http://hdl.handle.net/11356/1058},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {{CLARIN}.{SI} User Licence for Internet Corpora},
year = {2016} }
```
### Contributions
Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset. |
hrwac | 2022-11-03T16:15:15.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:hr",
... | null | The Croatian web corpus hrWaC was built by crawling the .hr top-level domain in 2011 and again in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Croatian vs. Serbian).
Version 2.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 2.1 contains newer and better linguistic annotations. | @misc{11356/1064,
title = {Croatian web corpus {hrWaC} 2.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1064},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} } | null | 0 | 3 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- hr
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: HrWac
dataset_info:
features:
- name: sentence
dtype: string
config_name: hrwac
splits:
- name: train
num_bytes: 43994569015
num_examples: 1736944727
download_size: 9217221471
dataset_size: 43994569015
---
# Dataset Card for HrWac
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/hrwac/
- **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1064
- **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic11-hrwac.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)
### Dataset Summary
The Croatian web corpus hrWaC was built by crawling the .hr top-level domain in 2011 and again in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Croatian vs. Serbian).
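At roughly 44 GB on disk (about 9 GB compressed download), the corpus is best consumed incrementally. A minimal sketch (the id `hrwac` comes from this card; whether the loader supports streaming is an assumption, and a regular `load_dataset` call is the fallback):

```python
from itertools import islice

from datasets import load_dataset

# Assumption: the loading script supports streaming; otherwise drop
# `streaming=True` and expect a ~9 GB download.
ds = load_dataset("hrwac", split="train", streaming=True)

for row in islice(ds, 3):
    print(row["sentence"])
```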
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is monolingual, in Croatian.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- sentence: sentences as strings
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@misc{11356/1064,
title = {Croatian web corpus {hrWaC} 2.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1064},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} }
```
### Contributions
Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset. |
id_puisi | 2022-11-03T16:08:09.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:fill-mask",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:id",
"license:mit",
"poem-gene... | null | Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with its title and author. | null | null | 2 | 3 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- fill-mask
task_ids: []
paperswithcode_id: null
pretty_name: Indonesian Puisi
tags:
- poem-generation
dataset_info:
features:
- name: title
dtype: string
- name: author
dtype: string
- name: puisi
dtype: string
- name: puisi_with_header
dtype: string
splits:
- name: train
num_bytes: 10613475
num_examples: 7223
download_size: 10558108
dataset_size: 10613475
---
# Dataset Card for id_puisi
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [puisi-pantun-generator](https://github.com/ilhamfp/puisi-pantun-generator)
- **Repository:** [puisi-pantun-generator](https://github.com/ilhamfp/puisi-pantun-generator)
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** [Ilham Firdausi Putra](mailto:ilhamfputra31@gmail.com)
### Dataset Summary
Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with its title and author.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Indonesian
## Dataset Structure
### Data Instances
```
{
'puisi_with_header': 'TEPERANGKAP
Oleh Mangku Langit Jingga
Mungkin kau membiarkan aku
Membiarkan perasaan ini larut
Memberi ruang jiwaku hampa
Agar tetap terbiasa nikmati
Perangkap yang kau buat
Perisai yang kau banggakan
Takkan jadi tameng bagimu
Aku mengerti betapa hebatnya
Perangkap mu hei sang dewi
Ku akan terus merasa terbiasa
Dengan pesona indahmu
Ku masih akan nikmati hadirmu
Berjalanlah pada hati yang sama
Satu hati denganku
Walau ku terperangkap
Namunku nikmati dan jalani',
'title': 'TEPERANGKAP',
'author': 'Oleh Mangku Langit Jingga',
'puisi': 'Mungkin kau membiarkan aku
Membiarkan perasaan ini larut
Memberi ruang jiwaku hampa
Agar tetap terbiasa nikmati
Perangkap yang kau buat
Perisai yang kau banggakan
Takkan jadi tameng bagimu
Aku mengerti betapa hebatnya
Perangkap mu hei sang dewi
Ku akan terus merasa terbiasa
Dengan pesona indahmu
Ku masih akan nikmati hadirmu
Berjalanlah pada hati yang sama
Satu hati denganku
Walau ku terperangkap
Namunku nikmati dan jalani',
}
```
### Data Fields
- `puisi_with_header`: the raw text from scraping
- `title`: the title extracted from the raw text using regex
- `author`: the author extracted from the raw text using regex
- `puisi`: the poem with the title and author stripped out using regex (see the sketch below)
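The exact regex used by the curator is not published, so the following is only a hypothetical reconstruction of the header split, based on the instance shown above (title on the first line, an "Oleh" author line on the second, poem body afterwards):

```python
import re

# Hypothetical reconstruction of the curator's regex; the author line keeps
# its "Oleh" ("by") prefix, matching the published `author` field.
HEADER_RE = re.compile(
    r"^(?P<title>[^\n]+)\n(?P<author>Oleh[^\n]+)\n(?P<puisi>.+)$",
    re.DOTALL,
)

def split_header(puisi_with_header: str):
    match = HEADER_RE.match(puisi_with_header.strip())
    if match is None:
        # Some entries do not match, as noted under Other Known Limitations.
        return None
    return match.group("title"), match.group("author"), match.group("puisi")
```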
### Data Splits
The dataset contains only a train set.
## Dataset Creation
### Curation Rationale
The dataset was initially collected as an experiment to generate an Indonesian poem using GPT-2.
### Source Data
#### Initial Data Collection and Normalization
The dataset was scraped using BeautifulSoup from lokerpuisi.web.id (the data no longer exists on the original blog). The title and author columns were produced using regex matches on the puisi_with_header column.
#### Who are the source language producers?
The poems were written by humans. Users of the original blog voluntarily submitted their original poems to get published on the blog.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The regex match used to extract the title & author from the raw text is not perfect. Some titles & authors still fail to be extracted.
## Additional Information
### Dataset Curators
Ilham Firdausi Putra
### Licensing Information
MIT License
### Citation Information
[N/A]
### Contributions
Thanks to [@ilhamfp](https://github.com/ilhamfp) for adding this dataset. |
imppres | 2023-01-25T14:32:53.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | null | Over 25k semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures. | @inproceedings{jeretic-etal-2020-natural,
title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}",
author = "Jereti\v{c}, Paloma and
Warstadt, Alex and
Bhooshan, Suvrat and
Williams, Adina",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.768",
doi = "10.18653/v1/2020.acl-main.768",
pages = "8690--8705",
abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.",
} | null | 0 | 3 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
paperswithcode_id: imppres
pretty_name: IMPPRES
dataset_info:
- config_name: presupposition_all_n_presupposition
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: all_n_presupposition
num_bytes: 458492
num_examples: 1900
download_size: 335088
dataset_size: 458492
- config_name: presupposition_both_presupposition
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: both_presupposition
num_bytes: 432792
num_examples: 1900
download_size: 335088
dataset_size: 432792
- config_name: presupposition_change_of_state
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: change_of_state
num_bytes: 308627
num_examples: 1900
download_size: 335088
dataset_size: 308627
- config_name: presupposition_cleft_existence
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: cleft_existence
num_bytes: 363238
num_examples: 1900
download_size: 335088
dataset_size: 363238
- config_name: presupposition_cleft_uniqueness
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: cleft_uniqueness
num_bytes: 388779
num_examples: 1900
download_size: 335088
dataset_size: 388779
- config_name: presupposition_only_presupposition
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: only_presupposition
num_bytes: 349018
num_examples: 1900
download_size: 335088
dataset_size: 349018
- config_name: presupposition_possessed_definites_existence
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: possessed_definites_existence
num_bytes: 362334
num_examples: 1900
download_size: 335088
dataset_size: 362334
- config_name: presupposition_possessed_definites_uniqueness
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: possessed_definites_uniqueness
num_bytes: 459403
num_examples: 1900
download_size: 335088
dataset_size: 459403
- config_name: presupposition_question_presupposition
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: trigger
dtype: string
- name: trigger1
dtype: string
- name: trigger2
dtype: string
- name: presupposition
dtype: string
- name: gold_label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: UID
dtype: string
- name: pairID
dtype: string
- name: paradigmID
dtype: int16
splits:
- name: question_presupposition
num_bytes: 397227
num_examples: 1900
download_size: 335088
dataset_size: 397227
- config_name: implicature_connectives
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold_label_log
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: gold_label_prag
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: spec_relation
dtype: string
- name: item_type
dtype: string
- name: trigger
dtype: string
- name: lexemes
dtype: string
splits:
- name: connectives
num_bytes: 221868
num_examples: 1200
download_size: 335088
dataset_size: 221868
- config_name: implicature_gradable_adjective
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold_label_log
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: gold_label_prag
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: spec_relation
dtype: string
- name: item_type
dtype: string
- name: trigger
dtype: string
- name: lexemes
dtype: string
splits:
- name: gradable_adjective
num_bytes: 153672
num_examples: 1200
download_size: 335088
dataset_size: 153672
- config_name: implicature_gradable_verb
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold_label_log
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: gold_label_prag
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: spec_relation
dtype: string
- name: item_type
dtype: string
- name: trigger
dtype: string
- name: lexemes
dtype: string
splits:
- name: gradable_verb
num_bytes: 180702
num_examples: 1200
download_size: 335088
dataset_size: 180702
- config_name: implicature_modals
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold_label_log
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: gold_label_prag
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: spec_relation
dtype: string
- name: item_type
dtype: string
- name: trigger
dtype: string
- name: lexemes
dtype: string
splits:
- name: modals
num_bytes: 178560
num_examples: 1200
download_size: 335088
dataset_size: 178560
- config_name: implicature_numerals_10_100
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold_label_log
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: gold_label_prag
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: spec_relation
dtype: string
- name: item_type
dtype: string
- name: trigger
dtype: string
- name: lexemes
dtype: string
splits:
- name: numerals_10_100
num_bytes: 208620
num_examples: 1200
download_size: 335088
dataset_size: 208620
- config_name: implicature_numerals_2_3
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold_label_log
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: gold_label_prag
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: spec_relation
dtype: string
- name: item_type
dtype: string
- name: trigger
dtype: string
- name: lexemes
dtype: string
splits:
- name: numerals_2_3
num_bytes: 188784
num_examples: 1200
download_size: 335088
dataset_size: 188784
- config_name: implicature_quantifiers
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: gold_label_log
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: gold_label_prag
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: spec_relation
dtype: string
- name: item_type
dtype: string
- name: trigger
dtype: string
- name: lexemes
dtype: string
splits:
- name: quantifiers
num_bytes: 176814
num_examples: 1200
download_size: 335088
dataset_size: 176814
---
# Dataset Card for IMPPRES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/facebookresearch/Imppres)
- **Repository:** [Github](https://github.com/facebookresearch/Imppres)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.acl-main.768)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Over 25k semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures.
### Supported Tasks and Leaderboards
Natural Language Inference.
### Languages
English.
## Dataset Structure
### Data Instances
The data consists of 2 configurations: implicature and presupposition.
Each configuration consists of several different sub-datasets:
**Pressupposition**
- all_n_presupposition
- change_of_state
- cleft_uniqueness
- possessed_definites_existence
- question_presupposition
- both_presupposition
- cleft_existence
- only_presupposition
- possessed_definites_uniqueness
**Implicature**
- connectives
- gradable_adjective
- gradable_verb
- modals
- numerals_10_100
- numerals_2_3
- quantifiers
Each sentence type in IMPPRES is generated according to a template that specifies the linear order of the constituents in the sentence. The constituents are sampled from a vocabulary of over 3000 lexical items annotated with grammatical features needed to ensure well-formedness. We semi-automatically generate IMPPRES using a codebase developed by Warstadt et al. (2019a) and significantly expanded for the BLiMP dataset (Warstadt et al., 2019b).
Here is an instance of the raw presupposition data from any sub-dataset:
```json
{
"sentence1": "All ten guys that proved to boast might have been divorcing.",
"sentence2": "There are exactly ten guys that proved to boast.",
"trigger": "modal",
"presupposition": "positive",
"gold_label": "entailment",
"UID": "all_n_presupposition",
"pairID": "9e",
"paradigmID": 0
}
```
and the raw implicature data from any sub-dataset:
```json
{
"sentence1": "That teenager couldn't yell.",
"sentence2": "That teenager could yell.",
"gold_label_log": "contradiction",
"gold_label_prag": "contradiction",
"spec_relation": "negation",
"item_type": "control",
"trigger": "modal",
"lexemes": "can - have to"
}
```
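Each sub-dataset is exposed as its own config, and each config carries a single split named after the sub-dataset. A minimal loading sketch (config and split names taken from this card's metadata):

```python
from datasets import load_dataset

# Configs are named "<phenomenon>_<sub_dataset>"; the lone split in each
# config is named after the sub-dataset itself.
presup = load_dataset("imppres", "presupposition_all_n_presupposition",
                      split="all_n_presupposition")
implic = load_dataset("imppres", "implicature_connectives",
                      split="connectives")

# Gold labels are ClassLabel ids; int2str recovers the label strings.
print(presup.features["gold_label"].int2str(presup[0]["gold_label"]))
print(implic.features["gold_label_prag"].int2str(implic[0]["gold_label_prag"]))
```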
### Data Fields
**Presupposition**
The raw data fields in the presupposition sub-datasets differ slightly from the fields appearing in the HuggingFace dataset.
When dealing with the HF dataset, the following mapping of fields happens:
```
"premise" -> "sentence1"
"hypothesis"-> "sentence2"
"trigger" -> "trigger" or "Not_In_Example"
"trigger1" -> "trigger1" or "Not_In_Example"
"trigger2" -> "trigger2" or "Not_In_Example"
"presupposition" -> "presupposition" or "Not_In_Example"
"gold_label" -> "gold_label"
"UID" -> "UID"
"pairID" -> "pairID"
"paradigmID" -> "paradigmID"
```
For the most part, the raw fields remain unchanged. However, when it comes to the various `trigger` fields, a new mapping was introduced.
Some examples in the dataset only have the `trigger` field, while other examples have the `trigger1` and `trigger2` fields without the `trigger` or `presupposition` fields.
Nominally, most examples look like the example in the Data Instances section above. Occasionally, however, some examples will look like:
```json
{
'sentence1': 'Did that committee know when Lissa walked through the cafe?',
'sentence2': 'That committee knew when Lissa walked through the cafe.',
'trigger1': 'interrogative',
'trigger2': 'unembedded',
'gold_label': 'neutral',
'control_item': True,
'UID': 'question_presupposition',
'pairID': '1821n',
'paradigmID': 95
}
```
In this example, `trigger1` and `trigger2` appear and `presupposition` and `trigger` are removed. This maintains the length of the dictionary.
To account for these examples, we have thus introduced the mapping above such that all examples accessed through the HF Datasets interface will have the same size as well as the same fields.
In the event that an example does not have a value for one of the fields, the field is maintained in the dictionary but given a value of `Not_In_Example`.
To illustrate this point, the example given in the Data Instances section above would look like the following in the HF Datasets:
```json
{
"premise": "All ten guys that proved to boast might have been divorcing.",
"hypothesis": "There are exactly ten guys that proved to boast.",
"trigger": "modal",
"trigger1": "Not_In_Example",
"trigger2": "Not_In_Example"
"presupposition": "positive",
"gold_label": "entailment",
"UID": "all_n_presupposition",
"pairID": "9e",
"paradigmID": 0
}
```
Below is description of the fields:
```
"premise": The premise.
"hypothesis": The hypothesis.
"trigger": A detailed discussion of trigger types appears in the paper.
"trigger1": A detailed discussion of trigger types appears in the paper.
"trigger2": A detailed discussion of trigger types appears in the paper.
"presupposition": positive or negative.
"gold_label": Corresponds to entailment, contradiction, or neutral.
"UID": Unique id.
"pairID": Sentence pair ID.
"paradigmID": ?
```
It is not immediately clear what the difference between `trigger`, `trigger1`, and `trigger2` is, or what the `paradigmID` refers to.
**Implicature**
The `implicature` fields only have the mapping below:
```
"premise" -> "sentence1"
"hypothesis"-> "sentence2"
```
Here is a description of the fields:
```
"premise": The premise.
"hypothesis": The hypothesis.
"gold_label_log": Gold label for a logical reading of the sentence pair.
"gold_label_prag": Gold label for a pragmatic reading of the sentence pair.
"spec_relation": ?
"item_type": ?
"trigger": A detailed discussion of trigger types appears in the paper.
"lexemes": ?
```
### Data Splits
As the dataset was created to test already trained models, the only split that exists is for testing.
## Dataset Creation
### Curation Rationale
IMPPRES was created to evaluate how well trained NLI models recognize several classes of presuppositions and scalar implicatures.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotations were generated semi-automatically.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
IMPPRES is available under a Creative Commons Attribution-NonCommercial 4.0 International Public License ("The License"). You may not use these files except in compliance with the License. Please see the LICENSE file for more information before you use the dataset.
### Citation Information
```
@inproceedings{jeretic-etal-2020-natural,
title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}",
author = "Jereti\v{c}, Paloma and
Warstadt, Alex and
Bhooshan, Suvrat and
Williams, Adina",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.768",
doi = "10.18653/v1/2020.acl-main.768",
pages = "8690--8705",
abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.",
}
```
### Contributions
Thanks to [@aclifton314](https://github.com/aclifton314) for adding this dataset. |
inquisitive_qg | 2022-11-18T20:09:50.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"question-generation",
"region:us"
] | null | A dataset of about 20k questions that are elicited from readers as they naturally read through a document sentence by sentence. Compared to existing datasets, INQUISITIVE questions target more towards high-level (semantic and discourse) comprehension of text. Because these questions are generated while the readers are processing the information, the questions directly communicate gaps between the reader’s and writer’s knowledge about the events described in the text, and are not necessarily answered in the document itself. This type of question reflects a real-world scenario: if one has questions during reading, some of them are answered by the text later on, the rest are not, but any of them would help further the reader’s understanding at the particular point when they asked it. This resource could enable question generation models to simulate human-like curiosity and cognitive processing, which may open up a new realm of applications. | @InProceedings{ko2020inquisitive,
author = {Ko, Wei-Jen and Chen, Te-Yuan and Huang, Yiyan and Durrett, Greg and Li, Junyi Jessy},
title = {Inquisitive Question Generation for High Level Text Comprehension},
booktitle = {Proceedings of EMNLP},
year = {2020},
} | null | 1 | 3 | ---
pretty_name: InquisitiveQg
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: inquisitive
tags:
- question-generation
dataset_info:
features:
- name: id
dtype: int32
- name: article_id
dtype: int32
- name: article
dtype: string
- name: sentence_id
dtype: int32
- name: sentence
dtype: string
- name: span
dtype: string
- name: question
dtype: string
- name: span_start_position
dtype: int32
- name: span_end_position
dtype: int32
config_name: plain_text
splits:
- name: train
num_bytes: 66099232
num_examples: 15931
- name: validation
num_bytes: 8904329
num_examples: 1991
- name: test
num_bytes: 7167203
num_examples: 1894
download_size: 7085941
dataset_size: 82170764
---
# Dataset Card for InquisitiveQg
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed]
- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
isizulu_ner_corpus | 2023-01-25T14:33:13.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zu",
"license:other",
"region:us"
] | null | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | @inproceedings{isizulu_ner_corpus,
author = {A.N. Manzini and
Roald Eiselen},
title = {NCHLT isiZulu Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/319},
} | null | 0 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- zu
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Isizulu Ner Corpus
license_details: Creative Commons Attribution 2.5 South Africa
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: isizulu_ner_corpus
splits:
- name: train
num_bytes: 4038876
num_examples: 10956
download_size: 25097584
dataset_size: 4038876
---
# Dataset Card for Isizulu Ner Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Isizulu Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/319)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za)
### Dataset Summary
The isiZulu NER Corpus is a Zulu dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Zulu language. The dataset uses CoNLL shared task annotation standards.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Zulu.
## Dataset Structure
### Data Instances
A data point consists of sentences separated by an empty line, with tab-separated tokens and tags. For example:
{'id': '0',
'ner_tags': [7, 8, 0, 0, 0],
'tokens': ['Lesi', 'sigaba', 'se-website', ',', 'esikhonjiswe']
}
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). OUT is used for tokens not considered part of any named entity.
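As a minimal sketch (assuming your 🤗 `datasets` environment can still load this dataset under the id `isizulu_ner_corpus`), the integer tag ids can be mapped back to these string labels like so:
```python
from datasets import load_dataset

# Load the corpus; as noted under "Data Splits" below, everything is in `train`.
dataset = load_dataset("isizulu_ner_corpus", split="train")

# Recover the string labels ("OUT", "B-PERS", ...) from the integer ids.
tag_names = dataset.features["ner_tags"].feature.names

example = dataset[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag_names[tag_id]}")
```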
### Data Splits
The data was not split.
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources for a new language, Zulu.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data is based on the South African government domain and was crawled from gov.za websites.
#### Who are the source language producers?
The data was produced by writers of South African government websites (gov.za).
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated during the NCHLT text resource development project.
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).
See: [more information](http://www.nwu.ac.za/ctext)
### Licensing Information
The data is licensed under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode).
### Citation Information
```
@inproceedings{isizulu_ner_corpus,
author = {A.N. Manzini and
Roald Eiselen},
title = {NCHLT isiZulu Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/319},
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset. |
makhzan | 2022-11-03T16:07:47.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"... | null | An Urdu text corpus for machine learning, natural language processing and linguistic analysis. | null | null | 0 | 3 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ur
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: makhzan
dataset_info:
features:
- name: file_id
dtype: string
- name: metadata
dtype: string
- name: title
dtype: string
- name: num-words
dtype: int64
- name: contains-non-urdu-languages
dtype: string
- name: document_body
dtype: string
splits:
- name: train
num_bytes: 35637310
num_examples: 5522
download_size: 15187763
dataset_size: 35637310
---
# Dataset Card for makhzan
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://matnsaz.net/en/makhzan
- **Repository:** https://github.com/zeerakahmed/makhzan
- **Paper:** [More Information Needed]
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** Zeerak Ahmed
### Dataset Summary
An Urdu text corpus for machine learning, natural language processing and linguistic analysis.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
ur
## Dataset Structure
### Data Instances
```
{
"contains-non-urdu-languages": "No",
"document_body":
"
<body>
<section>
<p>بنگلہ دیش کی عدالتِ عالیہ نے طلاق کے ایک مقدمے کا فیصلہ کرتے ہوئے علما کے فتووں کو غیر قانونی قرار دیا ہے۔ عدالت نے پارلیمنٹ سے یہ درخواست کی ہے کہ وہ جلد ایسا قانون وضع کرے کہ جس کے بعد فتویٰ بازی قابلِ دست اندازیِ پولیس جرم بن جائے۔ بنگلہ دیش کے علما نے اس فیصلے پر بھر پور ردِ عمل ظاہرکرتے ہوئے اس کے خلاف ملک گیر تحریک چلانے کا اعلان کیا ہے۔ اس ضمن میں علما کی ایک تنظیم ”اسلامک یونٹی الائنس“ نے متعلقہ ججوں کو مرتد یعنی دین سے منحرف اور دائرۂ اسلام سے خارج قرار دیا ہے۔</p>
<p>فتوے کا لفظ دو موقعوں پر استعمال ہوتا ہے۔ ایک اس موقع پر جب کوئی صاحبِ علم شریعت کے کسی مئلے کے بارے میں اپنی رائے پیش کرتا ہے۔ دوسرے اس موقع پر جب کوئی عالمِ دین کسی خاص واقعے کے حوالے سے اپنا قانونی فیصلہ صادر کرتا ہے۔ ایک عرصے سے ہمارے علما کے ہاں اس دوسرے موقعِ استعمال کا غلبہ ہو گیا ہے۔ اس کا نتیجہ یہ نکلا ہے کہ اس لفظ کا رائے یا نقطۂ نظر کے مفہوم میں استعمال کم و بیش متروک ہو گیا ہے۔ چنانچہ اب فتوے کا مطلب ہی علما کی طرف سے کسی خاص مألے یا واقعے کے بارے میں حتمی فیصلے کا صدور سمجھا جاتا ہے۔ علما اسی حیثیت سے فتویٰ دیتے ہیں اور عوام الناس اسی اعتبار سے اسے قبول کرتے ہیں۔ اس صورتِ حال میں ہمارے نزدیک، چند مسائل پیدا ہوتے ہیں۔ اس سے پہلے کہ ہم مذکورہ فیصلے کے بارے میں اپنا تاثر بیان کریں، یہ ضروری معلوم ہوتا ہے کہ مختصر طور پر ان مسائل کا جائزہ لے لیا جائے۔</p>
<p>پہلا مألہ یہ پیدا ہوتا ہے کہ قانون سازی اور شرعی فیصلوں کا اختیار ایسے لوگوں کے ہاتھ میں آجاتا ہے جو قانون کی رو سے اس کے مجاز ہی نہیں ہوتے۔ کسی میاں بیوی کے مابین طلاق کے مألے میں کیا طلاق واقع ہوئی ہے یا نہیں ہوئی؟ ان کا نکاح قائم ہے یا باطل ہو گیا ہے؟ رمضان یا عید کا چاند نظر آیا ہے یا نہیں آیا؟کوئی مسلمان اپنے کسی قول یا اقدام کی وجہ سے کہیں دائرۂ اسلام سے خارج اورنتیجۃً مسلم شہریت کے قانونی حقوق سے محروم تو نہیں ہو گیا؟ یہ اور اس نوعیت کے بہت سے دوسرے معاملات سر تا سر قانون اور عدالت سے متعلق ہوتے ہیں۔ علما کی فتویٰ سازی کے نتیجے میںیہ امور گویا حکومت اورعدلیہ کے ہاتھ سے نکل کر غیر متعلق افراد کے ہاتھوں میں آجاتے ہیں۔</p>
<p>دوسرا مألہ یہ پیدا ہوتا ہے کہ قانون کی حاکمیت کا تصور مجروح ہوتا ہے اور لوگوں میں قانون سے روگردانی کے رجحانات کو تقویت ملتی ہے۔ اس کی وجہ یہ ہے کہ قانون اپنی روح میں نفاذ کا متقاضی ہوتا ہے۔ اگر اسے نفاذ سے محروم رکھا جائے تو اس کی حیثیت محض رائے اور نقطۂ نظر کی سی ہوتی ہے۔ غیر مجاز فرد سے صادر ہونے والا فتویٰ یا قانون حکومت کی قوتِ نافذہ سے محروم ہوتا ہے۔ اس کی خلاف ورزی پر کسی قسم کی سزا کا خوف نہیں ہوتا۔ چنانچہ فتویٰ اگر مخاطب کی پسند کے مطابق نہ ہو تو اکثر وہ اسے ماننے سے انکار کر دیتا ہے۔ اس طرح وہ فتویٰ یا قانون بے توقیر ہوتا ہے۔ ایسے ماحول میں رہنے والے شہریوں میں قانون ناپسندی کا رجحان فروغ پاتا ہے اور جیسے ہی انھیں موقع ملتا ہے وہ بے دریغ قانون کی خلاف ورزی کر ڈالتے ہیں۔</p>
<p>تیسرامسئلہ یہ پیدا ہوتا ہے کہ اگرغیر مجاز افراد سے صادر ہونے والے فیصلوں کو نافذ کرنے کی کوشش کی جائے تو ملک میں بد نظمی اور انارکی کا شدید اندیشہ پیدا ہو جاتا ہے۔ جب غیر مجازافراد سے صادر ہونے والے قانونی فیصلوں کو حکومتی سرپرستی کے بغیر نافذ کرنے کی کوشش کی جاتی ہے تو اپنے عمل سے یہ اس بات کا اعلان ہوتا ہے کہ مرجعِ قانون و اقتدارتبدیل ہو چکا ہے۔ جب کوئی عالمِ دین مثال کے طور پر، یہ فتویٰ صادر کرتا ہے کہ سینما گھروں اور ٹی وی اسٹیشنوں کو مسمار کرنامسلمانوں کی ذمہ داری ہے، یا کسی خاص قوم کے خلاف جہاد فرض ہو چکا ہے، یا فلاں کی دی گئی طلاق واقع ہو گئی ہے اور فلاں کی نہیں ہوئی، یا فلاں شخص یا گروہ اپنا اسلامی تشخص کھو بیٹھا ہے تو وہ درحقیقت قانونی فیصلہ جاری کر رہا ہوتا ہے۔ دوسرے الفاظ میں، وہ ریاست کے اندر اپنی ایک الگ ریاست بنانے کا اعلان کر رہا ہوتا ہے۔ اس کا نتیجہ سوائے انتشار اور انارکی کے اور کچھ نہیں نکلتا۔ یہی وجہ ہے کہ جن علاقوں میں حکومت کی گرفت کمزور ہوتی ہے وہاں اس طرح کے فیصلوں کا نفاذ بھی ہو جاتا ہے اور حکومت منہ دیکھتی رہتی ہے۔</p>
<p>چوتھا مسئلہ یہ پیدا ہوتا ہے کہ مختلف مذہبی مسالک کی وجہ سے ایک ہی معاملے میں مختلف اور متضاد فتوے منظرِ عام پر آتے ہیں۔ یہ تو ہمارے روز مرہ کی بات ہے کہ ایک ہی گروہ کو بعض علماے دین کافر قرار دیتے ہیں اور بعض مسلمان سمجھتے ہیں۔ کسی شخص کے منہ سے اگر ایک موقع پر طلاق کے الفاظ تین بار نکلتے ہیں تو بعض علما اس پر ایک طلاق کا حکم لگا کر رجوع کا حق باقی رکھتے ہیں اور بعض تین قرار دے کررجوع کو باطل قرار دیتے ہیں۔ یہ صورتِ حال ایک عام آدمی کے لیے نہایت دشواریاں پیدا کر دیتی ہے۔</p>
<p>پانچواں مسئلہ یہ پیدا ہوتا ہے کہ حکمران اگر دین و شریعت سے کچھ خاص دلچسپی نہ رکھتے ہوں تو وہ اس صورتِ حال میں شریعت کی روشنی میں قانون سازی کی طرف متوجہ نہیں ہوتے۔ کام چل رہا ہے کے اصول پر وہ اس طریقِ قانون سازی سے سمجھوتاکیے رہتے ہیں۔ اس کا نتیجہ یہ نکلتا ہے کہ حکومتی ادارے ضروری قانون سازی کے بارے میں بے پروائی کا رویہ اختیار کرتے ہیں اور قوانین اپنے فطری ارتقا سے محروم رہتے ہیں۔</p>
<p>چھٹا مألہ یہ پیدا ہوتا ہے کہ رائج الوقت قانون اور عدالتوں کی توہین کے امکانات پیدا ہو جاتے ہیں۔ جب کسی مسئلے میں عدالتیں اپنا فیصلہ سنائیں اور علما اسے باطل قرار دیتے ہوئے اس کے برعکس اپنا فیصلہ صادر کریں تو اس سے عدالتوں کا وقار مجروح ہوتا ہے۔ اس کا مطلب یہ ہوتا ہے کہ کوئی شہری عدلیہ کو چیلنج کرنے کے لیے کھڑا ہو گیا ہے۔</p>
<p>ان مسائل کے تناظر میں بنگلہ دیش کی عدالتِ عالیہ کا فیصلہ ہمارے نزدیک، امت کی تاریخ میں ایک عظیم فیصلہ ہے۔ جناب جاوید احمد صاحب غامدی نے اسے بجا طور پر صدی کا بہترین فیصلہ قرار دیا ہے۔ بنگلہ دیش کی عدالت اگر علما کے فتووں اور قانونی فیصلوں پر پابندی لگانے کے بجائے، ان کے اظہارِ رائے پر پابندی عائدکرتی تو ہم اسے صدی کا بدترین فیصلہ قرار دیتے اور انھی صفحات میں بے خوفِ لومۃ و لائم اس پر نقد کر رہے ہوتے۔</p>
<p>موجودہ زمانے میں امتِ مسلمہ کا ایک بڑا المیہ یہ ہے کہ اس کے علما اپنی اصل ذمہ داری کو ادا کرنے کے بجائے ان ذمہ داریوں کو ادا کرنے پر مصر ہیں جن کے نہ وہ مکلف ہیں اور نہ اہل ہیں۔ قرآن و سنت کی رو سے علما کی اصل ذمہ داری دعوت و تبلیغ، انذار و تبشیر اور تعلیم و تحقیق ہے۔ ان کا کام سیاست نہیں، بلکہ سیاست دانوں کو دین کی رہنمائی سے آگاہی ہے؛ ان کا کام حکومت نہیں، بلکہ حکمرانوں کی اصلاح کی کوشش ہے؛ ان کا کام جہاد و قتال نہیں، بلکہ جہادکی تعلیم اور جذبۂ جہاد کی بیداری ہے؛ اسی طرح ان کا کام قانون سازی اور فتویٰ بازی نہیں بلکہ تحقیق و اجتہاد ہے۔ گویا انھیں قرآنِ مجیدکامفہوم سمجھنے، سنتِ ثابتہ کا مدعا متعین کرنے اور قولِ پیغمبر کا منشامعلوم کرنے کے لیے تحقیق کرنی ہے اور جن امور میں قرآن و سنت خاموش ہیں ان میں اپنی عقل و بصیرت سے اجتہادی آراقائم کرنی ہیں۔ ان کی کسی تحقیق یا اجتہاد کو جب عدلیہ یا پارلیمنٹ قبول کرے گی تو وہ قانون قرار پائے گا۔ اس سے پہلے اس کی حیثیت محض ایک رائے کی ہوگی۔ اس لیے اسے اسی حیثیت سے پیش کیا جائے گا۔</p>
<p>اس کا مطلب یہ ہے کہ کوئی حکم نہیں لگایا جائے گا، کوئی فیصلہ نہیں سنایا جائے گا، کوئی فتویٰ نہیں دیا جائے گا، بلکہ طالبِ علمانہ لب و لہجے میں محض علم و استدلال کی بنا پر اپنا نقطۂ نظر پیش کیا جائے گا۔ یہ نہیں کہا جائے گا کہ فلاں شخص کافر ہے، بلکہ اس کی اگر ضرورت پیش آئے تو یہ کہا جائے گا کہ فلاں شخص کا فلاں عقیدہ کفر ہے۔ یہ نہیں کہا جائے گا کہ فلاں آدمی دائرۂ اسلام سے خارج ہو گیا ہے، بلکہ یہ کہا جائے گا کہ فلاں آدمی کا فلاں نقطۂ نظر اسلام کے دائرے میں نہیں آتا۔ یہ نہیں کہا جائے گا فلاں آدمی مشرک ہے، بلکہ یہ کہا جائے گا فلاں نظریہ یا فلاں طرزِ عمل شرک ہے۔ یہ نہیں کہا جائے گا کہ زید کی طرف سے دی گئی ایک وقت کی تین طلاقیں واقع ہو گئی ہیں، بلکہ یہ کہا جائے گا کہ ایک وقت کی تین طلاقیں واقع ہو نی چاہییں۔</p>
<p>حکم لگانا، فیصلہ سنانا، قانون وضع کرنا اورفتویٰ جاری کرنا درحقیقت، عدلیہ اور حکومت کا کام ہے کسی عالمِ دین یا کسی اور غیر مجاز فرد کی طرف سے اس کام کو انجام دینے کی کوشش سراسر تجاوز ہے۔ خلافتِ راشدہ کے زمانے میں اس اصول کو ہمیشہ ملحوظ رکھا گیا۔ شاہ ولی اللہ محدث دہلوی اپنی کتاب ”ازالتہ الخفا ء“ میں لکھتے ہیں:</p>
<blockquote>
<p>”اس زمانے تک وعظ اور فتویٰ خلیفہ کی رائے پر موقوف تھا۔ خلیفہ کے حکم کے بغیر نہ وعظ کہتے تھے اور نہ فتویٰ دیتے تھے۔ بعد میں خلیفہ کے حکم کے بغیر وعظ کہنے اور فتویٰ دینے لگے اور فتویٰ کے معاملے میں جماعت (مجلسِ شوریٰ) کے مشورہ کی جو صورت پہلے تھی وہ باقی نہ رہی——- (اس زمانے میں) جب کوئی اختلافی صورت نمودار ہوتی، خلیفہ کے سامنے معاملہ پیش کرتے، خلیفہ اہلِ علم و تقویٰ سے مشورہ کرنے کے بعد ایک رائے قائم کرتا اور وہی سب لوگوں کی رائے بن جاتی۔ حضرت عثمان کی شہادت کے بعد ہر عالم بطورِ خود فتویٰ دینے لگا اور اس طرح مسلمانوں میں اختلاف برپا ہوا۔“ (بحوالہ ”اسلامی ریاست میں فقہی اختلافات کا حل“، مولاناامین احسن اصلاحی، ص۳۲)</p>
</blockquote>
</section>
</body>
",
"file_id": "0001.xml",
"metadata":
"
<meta>
<title>بنگلہ دیش کی عدالت کا تاریخی فیصلہ</title>
<author>
<name>سید منظور الحسن</name>
<gender>Male</gender>
</author>
<publication>
<name>Mahnama Ishraq February 2001</name>
<year>2001</year>
<city>Lahore</city>
<link>https://www.javedahmedghamidi.org/#!/ishraq/5adb7341b7dd1138372db999?articleId=5adb7452b7dd1138372dd6fb&year=2001&decade=2000</link>
<copyright-holder>Al-Mawrid</copyright-holder>
</publication>
<num-words>1694</num-words>
<contains-non-urdu-languages>No</contains-non-urdu-languages>
</meta>
",
"num-words": 1694,
"title": "بنگلہ دیش کی عدالت کا تاریخی فیصلہ"
}
```
### Data Fields
```file_id (str)```: Document file_id corresponding to filename in repository.
```metadata (str)```: XML-formatted string containing metadata on the document such as the document's title, information about the author and publication, as well as other potentially useful facts such as the number of Urdu words in the document and whether the document contains text in any other languages.
```title (str)```: Title of the document.
```num-words (int)```: Number of words in document.
```contains-non-urdu-languages (str)```: ```Yes``` if the document contains words other than Urdu, ```No``` otherwise.
```document_body (str)```: XML-formatted body of the document. Details below:
The document is divided into ```<section>``` elements. In general, the rule is that a clear visual demarcation in the original text (such as a page break or a horizontal rule) is used to indicate a section break. A heading does not automatically create a new section.
Each paragraph is a ```<p>``` element.
Headings are wrapped in a ```<heading>``` element.
Blockquotes are wrapped in a ```<blockquote>``` element. Blockquotes may themselves contain other elements.
Lists are wrapped in a ```<list>``` element. Individual items in each list are wrapped in an ```<li>``` element.
Poetic verses are wrapped in a ```<verse>``` element. Each verse is on a separate line but is not wrapped in an individual element.
Tables are wrapped in a ```<table>``` element. A table is divided into rows marked by ```<tr>``` and columns marked by ```<td>```.
Text not in the Urdu language is wrapped in an ```<annotation>``` tag (more below).
```<p>, <heading>, <li>, <td>``` and ```<annotation>``` tags are inline with the text (i.e. there is no newline character before or after the tag). Other tags have a newline after the opening and before the closing tag.
Due to the use of XML syntax, ```<```, ```>``` and ```&``` characters have been escaped as ```&lt;```, ```&gt;```, and ```&amp;``` respectively. This includes the use of these characters in URLs inside metadata.
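As a minimal sketch (assuming the dataset loads under the id ```makhzan```), the body can be parsed with the Python standard library; ```xml.etree.ElementTree``` resolves the escaped entities automatically:
```python
import xml.etree.ElementTree as ET

from datasets import load_dataset

dataset = load_dataset("makhzan", split="train")
doc = dataset[0]

# Parse the XML body and collect the plain text of each paragraph.
root = ET.fromstring(doc["document_body"])
paragraphs = ["".join(p.itertext()) for p in root.iter("p")]

print(doc["title"], "-", len(paragraphs), "paragraphs")
```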
### Data Splits
All the data is in one split, ```train```.
## Dataset Creation
### Curation Rationale
All text in this repository has been selected for quality of language, upholding high editorial standards. Given the poor quality of most published Urdu text in digital form, this selection criterion allows the use of this text for natural language processing and machine learning applications without the need to address fundamental quality issues in the text.
We have made efforts to ensure this text is as broadly representative as possible. Specifically we have attempted to select for as many authors as possible, and diversity in the gender of the author, as well as years and city of publication. This effort is imperfect, and we appreciate any attempts at pointing us to resources that can help diversify this text further.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Makhzan has been started with generous initial donations of text from two renowned journals Bunyad, from the Gurmani Center of Literature and Languages at the Lahore University of Management Sciences (LUMS), and Ishraq, from the Al-Mawrid Institute. This choice of sources allowed us to get a diversity of voices even in a small initial corpus, while ensuring the highest editorial standards available in published Urdu text. As a result, your models can also maintain high linguistic standards.
### Annotations
#### Annotation process
Text is structured and annotated using XML syntax. The ontology of elements used is loosely based around HTML, with simplifications made when HTML's specificity is not needed, and additions made to express common occurrences in this corpus that would be useful for linguistic analysis. The semantic tagging of text is editorial in nature, which is to say that another person semantically tagging the text may do so differently. Effort has been made, however, to ensure consistency, and to retain the original meaning of the text while making it easy to parse through linguistically different pieces of text for analysis.
Annotations have been made inline using an ```<annotation>``` element.
A language (```lang```) attribute is added to the ```<annotation>``` element to indicate text in other languages (such as quoted text or technical vocabulary presented in other languages and scripts). The attribute value is a two-character ISO 639-1 code. The resultant annotation for an Arabic quote, for example, will be ```<annotation lang="ar"></annotation>```.
A type (```type```) attribute is added to indicate text that is not Urdu but is also not in another natural language per se. URLs, for example, are wrapped in an ```<annotation type="url">``` tag.
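A small helper along these lines (a sketch, not part of the dataset tooling) could collect every annotated span together with its language or type label:
```python
import xml.etree.ElementTree as ET

def iter_annotations(document_body):
    """Yield (label, text) pairs for every <annotation> element,
    where label is the `lang` code or, failing that, the `type` attribute."""
    root = ET.fromstring(document_body)
    for ann in root.iter("annotation"):
        yield ann.get("lang") or ann.get("type"), "".join(ann.itertext())
```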
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
A few of the files do not have valid XML and cannot be loaded. This issue is tracked [here](https://github.com/zeerakahmed/makhzan/issues/28)
## Additional Information
### Dataset Curators
Zeerak Ahmed
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{makhzan,
title={Maḵẖzan},
howpublished = "\url{https://github.com/zeerakahmed/makhzan/}",
}
```
### Contributions
Thanks to [@arkhalid](https://github.com/arkhalid) for adding this dataset. |