id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
rombodawg/Platypus_Evol | 2023-08-22T04:41:17.000Z | [
"license:other",
"region:us"
] | rombodawg | null | null | null | 1 | 12 | ---
license: other
---
It's this dataset in evol-instruct format:
https://huggingface.co/datasets/garage-bAInd/Open-Platypus |
mystic-leung/medical_cord19 | 2023-09-14T03:00:13.000Z | [
"task_categories:summarization",
"language:aa",
"license:openrail",
"medical",
"region:us"
] | mystic-leung | null | null | null | 2 | 12 | ---
license: openrail
task_categories:
- summarization
language:
- aa
tags:
- medical
---
## Description
This dataset contains a large collection of biomedical abstracts and their corresponding summaries. |
mariogiordano/bert-sentiment-analysis | 2023-09-07T17:32:37.000Z | [
"region:us"
] | mariogiordano | null | null | null | 0 | 12 | Entry not found |
minh21/cpgQA-v1.0-unique-context-test-10-percent | 2023-09-08T13:58:38.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: id
dtype: int64
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 1292366
num_examples: 988
- name: test
num_bytes: 143063
num_examples: 109
download_size: 188281
dataset_size: 1435429
---
# Dataset Card for "cpgQA-v1.0-unique-context-test-10-percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sazirarrwth99/llama_2_triple_test | 2023-09-01T16:54:50.000Z | [
"region:us"
] | sazirarrwth99 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 21567851
num_examples: 35387
download_size: 8085737
dataset_size: 21567851
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama_2_triple_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mickume/alt_potterverse_tk | 2023-09-01T08:21:23.000Z | [
"region:us"
] | mickume | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 91409988.0
num_examples: 11153
- name: test
num_bytes: 10163040.0
num_examples: 1240
download_size: 47889519
dataset_size: 101573028.0
---
# Dataset Card for "alt_potterverse_tk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TiagoAdriano/entrevistas_medicas_yap | 2023-09-01T15:48:49.000Z | [
"task_categories:text-classification",
"language:en",
"medical",
"region:us"
] | TiagoAdriano | null | null | null | 1 | 12 | ---
task_categories:
- text-classification
language:
- en
tags:
- medical
pretty_name: Medical_interviews
--- |
dim/yandex_q_10k | 2023-09-01T21:11:57.000Z | [
"region:us"
] | dim | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: description
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 14596364.404151473
num_examples: 10000
download_size: 7769074
dataset_size: 14596364.404151473
---
# Dataset Card for "yandex_q_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Goorm-AI-04/Drone_RCS_Measurement | 2023-09-23T00:32:06.000Z | [
"region:us"
] | Goorm-AI-04 | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: Heli_HH
path: data/Heli_HH-*
- split: Y600_HH
path: data/Y600_HH-*
- split: Hexa_VV
path: data/Hexa_VV-*
- split: M100_HV
path: data/M100_HV-*
- split: M100_VH
path: data/M100_VH-*
- split: P4P_HH
path: data/P4P_HH-*
- split: battery_HH
path: data/battery_HH-*
- split: Hexa_HH
path: data/Hexa_HH-*
- split: Walkera_VV
path: data/Walkera_VV-*
- split: Walkera_HH
path: data/Walkera_HH-*
- split: M100_VV
path: data/M100_VV-*
- split: Y600_VV
path: data/Y600_VV-*
- split: Mavic_HH
path: data/Mavic_HH-*
- split: P4P_VV
path: data/P4P_VV-*
- split: Parrot_HH
path: data/Parrot_HH-*
- split: F450_HH
path: data/F450_HH-*
- split: M100_HH
path: data/M100_HH-*
dataset_info:
features:
- name: f
dtype: int64
- name: theta
dtype: int64
- name: phi
dtype: int64
- name: RCS
dtype: float64
splits:
- name: Heli_HH
num_bytes: 15725280
num_examples: 491415
- name: Y600_HH
num_bytes: 16594080
num_examples: 518565
- name: Hexa_VV
num_bytes: 16594080
num_examples: 518565
- name: M100_HV
num_bytes: 16594080
num_examples: 518565
- name: M100_VH
num_bytes: 16594080
num_examples: 518565
- name: P4P_HH
num_bytes: 16594080
num_examples: 518565
- name: battery_HH
num_bytes: 3974880
num_examples: 124215
- name: Hexa_HH
num_bytes: 15725280
num_examples: 491415
- name: Walkera_VV
num_bytes: 16594080
num_examples: 518565
- name: Walkera_HH
num_bytes: 16594080
num_examples: 518565
- name: M100_VV
num_bytes: 16594080
num_examples: 518565
- name: Y600_VV
num_bytes: 16594080
num_examples: 518565
- name: Mavic_HH
num_bytes: 15725280
num_examples: 491415
- name: P4P_VV
num_bytes: 16594080
num_examples: 518565
- name: Parrot_HH
num_bytes: 15725280
num_examples: 491415
- name: F450_HH
num_bytes: 15725280
num_examples: 491415
- name: M100_HH
num_bytes: 16594080
num_examples: 518565
download_size: 4506112
dataset_size: 265136160
---
# Dataset Card for "Drone_RCS_Measurement"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nampdn-ai/mini-FLAN | 2023-09-05T04:29:00.000Z | [
"region:us"
] | nampdn-ai | null | null | null | 2 | 12 | Entry not found |
deven367/babylm-10M | 2023-09-06T03:25:14.000Z | [
"region:us"
] | deven367 | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 11102812
num_examples: 66392
- name: valid
num_bytes: 54930583
num_examples: 986022
- name: test
num_bytes: 59992087
num_examples: 1008854
download_size: 34622342
dataset_size: 126025482
---
# Dataset Card for "babylm-10M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Malmika/bio_dataset | 2023-09-16T10:10:27.000Z | [
"region:us"
] | Malmika | null | null | null | 0 | 12 | Entry not found |
etanios/shortened-pubmed | 2023-09-08T14:37:46.000Z | [
"region:us"
] | etanios | null | null | null | 0 | 12 | Entry not found |
Falah/portrait_prompts | 2023-09-09T07:01:09.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 22213518
num_examples: 100000
download_size: 2797158
dataset_size: 22213518
---
# Dataset Card for "portrait_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ugshanyu/url | 2023-09-10T15:13:53.000Z | [
"region:us"
] | ugshanyu | null | null | null | 0 | 12 | Entry not found |
prognosis/medquad-alpaca | 2023-09-11T08:18:54.000Z | [
"region:us"
] | prognosis | null | null | null | 0 | 12 | Entry not found |
ibm/AttaQ | 2023-09-26T13:29:13.000Z | [
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|Anthropic/hh-rlhf",
"language:en",
"license:mit",
"region:us"
] | ibm | null | null | null | 0 | 12 | ---
license: mit
language: en
multilinguality: monolingual
source_datasets: extended|Anthropic/hh-rlhf
task_categories:
- text-generation
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: label
    dtype: string
  - name: input
    dtype: string
  - name: additional_info
    dtype: string
---
# Dataset Card for "AttaQ"
### Dataset Summary
The Attack Questions (AttaQ) dataset is a set of adversarial examples in the form of questions designed to provoke harmful or inappropriate responses from an LLM, for use in red-teaming.
The goal of red-teaming is to evaluate an LLM's weaknesses that can lead to undesirable behaviors in real use. This is done by attacking the LLM with adversarial prompts from this dataset and evaluating its responses.
The dataset contains 1402 adversarial attacks on generative AI models, labeled into 7 categories (deception, discrimination, harmful information, substance abuse, sexual content, PII, violence).
The AttaQ dataset is constructed in two parts:
First, we methodically extracted attacks from [Anthropic's hh-rlhf dataset](https://github.com/anthropics/hh-rlhf/tree/master/red-team-attempts) and semi-automatically labeled them into the 7 categories.
Second, we generated actions per crime (from the [Wikipedia Crimes Page](https://en.wikipedia.org/wiki/Crime)) and then used those actions to generate attack questions.
Warnings:
1) The data contains offensive and upsetting content by nature, and therefore may not be easy to read. Please read it in accordance with your own personal risk tolerance.
2) An LLM's responses to the AttaQ samples are in many cases harmful and/or violent.
3) This dataset is a representative subset of all possible attacks. There are other attacks that can cause an LLM to produce harmful or inappropriate responses.
Restrictions:
The red-teaming community's goal is to make models less harmful. We restrict the usage of this dataset to making models less harmful.
### Data Fields
#### AttaQ
- `label`: corresponding label of adversarial question
- `input`: adversarial question
- `additional_info`: source of the adversarial question
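A minimal sketch of loading and inspecting these fields with the `datasets` library (the split name `train` is an assumption, not confirmed by the card):
```python
from datasets import load_dataset

# Load AttaQ and inspect one adversarial example.
attaq = load_dataset("ibm/AttaQ", split="train")  # split name assumed
sample = attaq[0]
print(sample["label"])            # category of the adversarial question
print(sample["input"])            # the adversarial question itself
print(sample["additional_info"])  # source of the adversarial question
```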
### Citation Information
TBD
|
PL-MTEB/hate_speech_pl-clustering | 2023-09-12T13:05:06.000Z | [
"license:cc-by-nc-sa-3.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 12 | ---
license: cc-by-nc-sa-3.0
---
|
lum-ai/metal-python-synthetic-explanations-gpt4 | 2023-09-15T17:08:25.000Z | [
"region:us"
] | lum-ai | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: id
dtype: string
- name: chunk_id
dtype: string
- name: model_name
dtype: string
- name: temperature
dtype: int64
- name: max_tokens
dtype: float64
- name: use_raw_code
dtype: bool
- name: description
dtype: string
- name: created_at
dtype: timestamp[ns]
- name: raw_text
dtype: string
- name: text
dtype: string
- name: code
dtype: string
- name: kind
dtype: string
- name: start_text
dtype: int64
- name: stop_text
dtype: int64
- name: start_code
dtype: int64
- name: stop_code
dtype: int64
- name: domain
dtype: string
- name: full_name
dtype: string
- name: license
struct:
- name: key
dtype: string
- name: name
dtype: string
- name: node_id
dtype: string
- name: spdx_id
dtype: string
- name: url
dtype: string
- name: stargazers_count
dtype: int64
- name: filename
dtype: string
- name: chunk_type
dtype: string
splits:
- name: train
num_bytes: 2896865017
num_examples: 313681
- name: validation
num_bytes: 173850658
num_examples: 18952
- name: test
num_bytes: 339322116
num_examples: 36740
download_size: 76607138
dataset_size: 3410037791
---
# Dataset Card for "metal-python-synthetic-explanations-gpt4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tmon546596046/processed_bert_dataset | 2023-09-12T08:32:54.000Z | [
"region:us"
] | tmon546596046 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 67615200.0
num_examples: 18782
download_size: 16390157
dataset_size: 67615200.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
crewdon/completeSynthetic | 2023-09-12T17:56:56.000Z | [
"region:us"
] | crewdon | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 332515
num_examples: 1570
download_size: 101432
dataset_size: 332515
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "newCompleteSyntheticDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Kranajan/test-llama2-1k | 2023-09-12T22:08:11.000Z | [
"region:us"
] | Kranajan | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 104225
num_examples: 284
download_size: 55095
dataset_size: 104225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sunghuncsa/skr_president | 2023-09-13T10:06:18.000Z | [
"region:us"
] | sunghuncsa | null | null | null | 0 | 12 | Entry not found |
nikchar/paper_test_assym_roberta_3_epochs_results | 2023-09-13T12:10:27.000Z | [
"region:us"
] | nikchar | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: claim
dtype: string
- name: evidence_wiki_url
dtype: string
- name: text
dtype: string
- name: retrieved_evidence_title
sequence: string
- name: retrieved_evidence_text
sequence: string
- name: labels
dtype: int64
- name: Retrieval_Success
dtype: bool
- name: Predicted_Labels
dtype: int64
- name: Predicted_Labels_Each_doc
sequence: int64
splits:
- name: train
num_bytes: 73601741
num_examples: 11073
download_size: 34426547
dataset_size: 73601741
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "paper_test_assym_roberta_3_epochs_results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rohanbalkondekar/generate_json | 2023-09-15T08:58:56.000Z | [
"region:us"
] | rohanbalkondekar | null | null | null | 0 | 12 | Entry not found |
gfbati/AjwaOrMedjool | 2023-10-09T07:47:47.000Z | [
"task_categories:image-classification",
"task_categories:tabular-classification",
"language:ar",
"language:en",
"license:cc-by-4.0",
"doi:10.57967/hf/1116",
"region:us"
] | gfbati | null | null | null | 1 | 12 | ---
license: cc-by-4.0
task_categories:
- image-classification
- tabular-classification
language:
- ar
- en
---
The dataset contains three subsets:
1) a dataset containing hand-crafted features to classify two types of organic dates (Ajwa or Medjool);
2) a dataset containing tabular data with features created automatically using deep learning to classify the two organic date types (Ajwa or Medjool);
3) a dataset for images of Ajwa and Medjool.
This study is considered the first work in Arabic to use shallow machine learning and deep learning to create accurate models for classifying organic Saudi dates, enabling scholars, researchers, and developers to build machine learning applications for classifying Saudi dates in various forms, such as websites, mobile apps, microcontrollers, tiny machine learning, and Internet of Things applications.
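A minimal loading sketch, assuming the repository can be read with the `datasets` library and exposes the image subset by default (subset and column names are assumptions; adjust to the actual file layout):
```python
from datasets import load_dataset

# Load the image subset and inspect one labeled example (Ajwa vs. Medjool).
dates = load_dataset("gfbati/AjwaOrMedjool", split="train")  # layout assumed
print(dates[0])
```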
Please cite the following paper: Bati GF. Ajwa or Medjool: a binary balanced dataset to teach machine
learning. Journal of Information Studies & Technology 2023:2.12.
https://doi.org/10.5339/jist.2023.12
Ajwa or Medjool is a binary balanced dataset for classifying organic Saudi dates, consisting of three subsets:
the first contains the tabular data with hand-crafted features for classifying the organic dates (Ajwa or Medjool);
the second contains the tabular data with features generated automatically using deep learning to classify the organic dates (Ajwa or Medjool);
and the third contains images of Ajwa and Medjool dates.
It is also the first research in Arabic to use classical machine learning and deep learning to create high-performing models for classifying organic Saudi dates without programming, enabling students, researchers, and developers to develop machine learning applications for classifying Saudi dates in various forms, such as websites, mobile apps, microcontrollers, Internet of Things applications, and tiny machine learning.
Please cite the paper listed above (https://doi.org/10.5339/jist.2023.12) when using the dataset.
Videos (in Arabic) explaining the dataset:
https://youtu.be/bPYHOYo4_Tw?feature=shared&t=1418
https://youtu.be/ADOuweANc5I?feature=shared&t=5775
https://youtu.be/PThKbc1kTSM?feature=shared&t=3253 |
Binaryy/multimodal-real-estate-search | 2023-09-16T07:50:18.000Z | [
"region:us"
] | Binaryy | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: 'Unnamed: 0'
dtype: int64
- name: Title
dtype: string
- name: Location
dtype: string
- name: Details
dtype: string
splits:
- name: train
num_bytes: 70812888.372
num_examples: 1041
download_size: 70215648
dataset_size: 70812888.372
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "multimodal-real-estate-search"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mtc/faithfulness_benchmark_sanity_check_gold_annotation | 2023-09-15T14:54:45.000Z | [
"region:us"
] | mtc | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: article_id
dtype: int64
- name: system
dtype: string
- name: sentence_ord
dtype: int64
- name: Comments
sequence: string
- name: pre_context
dtype: string
- name: post_context
dtype: string
- name: article_with_lead
dtype: string
- name: is_faithful
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 853849
num_examples: 318
download_size: 126490
dataset_size: 853849
---
# Dataset Card for "faithfulness_benchmark_sanity_check_gold_annotation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hellomyoh/2bytes-s30000-added-text | 2023-09-17T03:20:34.000Z | [
"region:us"
] | hellomyoh | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 58513942
num_examples: 30000
download_size: 29925305
dataset_size: 58513942
---
# Dataset Card for "2bytes-s30000-added-text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-unique-context-test-10-percent-validation-10-percent | 2023-09-17T18:29:42.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: is_impossible
dtype: bool
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 2050073
num_examples: 1615
- name: test
num_bytes: 260386
num_examples: 202
- name: validation
num_bytes: 261992
num_examples: 202
download_size: 0
dataset_size: 2572451
---
# Dataset Card for "COVID-QA-unique-context-test-10-percent-validation-10-percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
VatsaDev/UnagamiData | 2023-09-18T00:57:06.000Z | [
"region:us"
] | VatsaDev | null | null | null | 1 | 12 | Entry not found |
raghavneon/test_123 | 2023-09-17T21:58:45.000Z | [
"region:us"
] | raghavneon | null | null | null | 0 | 12 | Entry not found |
Lancelot53/srbd1_v2_annotated_segmented | 2023-09-18T19:14:48.000Z | [
"region:us"
] | Lancelot53 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1623614
num_examples: 2434
download_size: 525557
dataset_size: 1623614
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "srbd1_v2_annotated_segmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eswardivi/Tam_MSA | 2023-09-19T06:33:58.000Z | [
"region:us"
] | eswardivi | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
splits:
- name: train
num_bytes: 79205685.0
num_examples: 64
download_size: 78906043
dataset_size: 79205685.0
---
# Dataset Card for "Tam_MSA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dzotova/sapolsky_lecture_speaker | 2023-09-19T09:46:29.000Z | [
"region:us"
] | dzotova | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 115172.9502762431
num_examples: 144
- name: test
num_bytes: 29593.049723756907
num_examples: 37
download_size: 68985
dataset_size: 144766.0
---
# Dataset Card for "sapolsky_lecture_speaker"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vikp/clean_notebooks_labeled | 2023-09-19T16:01:42.000Z | [
"region:us"
] | vikp | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: code
dtype: string
- name: kind
dtype: string
- name: parsed_code
dtype: string
- name: quality_prob
dtype: float64
- name: learning_prob
dtype: float64
splits:
- name: train
num_bytes: 9995784915
num_examples: 648628
download_size: 4427950019
dataset_size: 9995784915
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "clean_notebooks_labeled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yicozy/dataset_study_dictionary | 2023-09-21T06:54:20.000Z | [
"region:us"
] | yicozy | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: study_ids
sequence: string
- name: corpus
dtype: string
splits:
- name: train
num_bytes: 1120563
num_examples: 7774
download_size: 118282
dataset_size: 1120563
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_study_dictionary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sguo08/ops | 2023-09-21T01:09:30.000Z | [
"task_categories:table-question-answering",
"size_categories:100K<n<1M",
"language:zh",
"code",
"region:us"
] | sguo08 | null | null | null | 0 | 12 | ---
task_categories:
- table-question-answering
language:
- zh
tags:
- code
size_categories:
- 100K<n<1M
--- |
tuankg1028/nghiem_dataset_21_9 | 2023-09-21T06:57:43.000Z | [
"region:us"
] | tuankg1028 | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 724194
num_examples: 350
download_size: 214579
dataset_size: 724194
---
# Dataset Card for "nghiem_dataset_21_9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vithika/DonutFineTuning | 2023-09-21T09:34:00.000Z | [
"region:us"
] | Vithika | null | null | null | 0 | 12 | Entry not found |
DopeorNope/20000sample_COT | 2023-09-21T11:57:31.000Z | [
"region:us"
] | DopeorNope | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: rationale
dtype: string
- name: task
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 23066106
num_examples: 21297
download_size: 9606299
dataset_size: 23066106
---
# Dataset Card for "20000sample_COT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hellomyoh/train_data_set_10001966-added-text | 2023-09-22T10:02:42.000Z | [
"region:us"
] | hellomyoh | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: num
dtype: int64
- name: english
dtype: string
- name: korean
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 497586414
num_examples: 1001966
download_size: 302932465
dataset_size: 497586414
---
# Dataset Card for "train_data_set_10001966-added-text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NewstaR/dolly-gpt | 2023-09-22T12:42:15.000Z | [
"region:us"
] | NewstaR | null | null | null | 0 | 12 | This dataset is intended solely for experimental purposes. We are exploring the capabilities of the GPT structure when applied to this dataset. The data will be used for fine-tuning the Falcon 1B model. Please note that the results generated from this dataset should be interpreted with caution, as they are part of an ongoing research project. |
Siyoun/plan_vic | 2023-09-22T16:06:46.000Z | [
"region:us"
] | Siyoun | null | null | null | 0 | 12 | Entry not found |
thatboyster/course_list | 2023-09-22T17:21:04.000Z | [
"region:us"
] | thatboyster | null | null | null | 0 | 12 | Entry not found |
Photolens/oasst1-en | 2023-10-02T13:46:38.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | Photolens | null | null | null | 2 | 12 | ---
configs:
- config_name: default
data_files:
- split: test_ift
path: data/test_ift-*
- split: train_ift
path: data/train_ift-*
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text
dtype: string
splits:
- name: test_ift
num_bytes: 6809402
num_examples: 2124
- name: train_ift
num_bytes: 60632912
num_examples: 19111
download_size: 36886751
dataset_size: 67442314
license: apache-2.0
language:
- en
--- |
taldarim/ar-higher-merged | 2023-09-23T12:53:16.000Z | [
"region:us"
] | taldarim | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 374438
num_examples: 280
- name: test
num_bytes: 370272
num_examples: 236
download_size: 283162
dataset_size: 744710
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "ar-higher-merged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sugeun/legal | 2023-09-25T02:29:28.000Z | [
"region:us"
] | sugeun | null | null | null | 0 | 12 | Entry not found |
blaze1411/base60-sparrow | 2023-09-27T07:37:05.000Z | [
"license:apache-2.0",
"region:us"
] | blaze1411 | null | null | null | 0 | 12 | ---
license: apache-2.0
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 5135758.0
num_examples: 31
- name: test
num_bytes: 308036.0
num_examples: 2
- name: validation
num_bytes: 647642.0
num_examples: 4
download_size: 6093489
dataset_size: 6091436.0
---
|
napatswift/thbud-doc-ocr | 2023-09-25T08:44:32.000Z | [
"region:us"
] | napatswift | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: words
sequence: string
- name: norm_bboxes
sequence:
sequence: float64
- name: ner_tags
sequence: 'null'
- name: class
dtype:
class_label:
names:
'0': toc
'1': entry
'2': other
splits:
- name: train
num_bytes: 6887148
num_examples: 1078
download_size: 2658905
dataset_size: 6887148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "thbud-doc-ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MattCoddity/docker_ps | 2023-09-25T15:51:16.000Z | [
"region:us"
] | MattCoddity | null | null | null | 0 | 12 | Entry not found |
joe-chiu/TinyChineseStories | 2023-09-25T23:19:08.000Z | [
"language:zh",
"region:us"
] | joe-chiu | null | null | null | 0 | 12 | ---
language:
- zh
license: cc-by-4.0
---
This is a dataset of short Chinese stories generated with GPT-3.5. It is inspired by the TinyStories dataset, but instead of millions of rows, I only generated a few thousand stories. The dataset was created as a learning exercise in using the GPT API to generate training data for a potential language model idea.
I created these stories by first using ChatGPT to generate a list of male and female character names, a list of genres with one-sentence story themes, and a list of story starters (similar to "Once upon a time"). I then used the GPT-3.5 chat completion API to generate short stories given the three constraints: genre, theme, and story starter. The stories were generated in batches of 3, so every 3 stories share the exact same parameters.
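A hypothetical sketch of that generation loop, using the legacy `openai` ChatCompletion API that was current at the time (the constraint lists and prompt wording are illustrative, not the author's actual code):
```python
import random
import openai  # legacy SDK (<1.0); openai.api_key must be set first

# Illustrative constraint lists; the real ones were generated with ChatGPT.
genres_and_themes = [("fable", "kindness is always repaid")]
starters = ["很久很久以前"]  # "A long, long time ago"
names = ["小明", "小红"]

genre, theme = random.choice(genres_and_themes)
prompt = (f"Write 3 short Chinese stories for children. Genre: {genre}. "
          f"Theme: {theme}. Each story must begin with "
          f"'{random.choice(starters)}' and feature {random.choice(names)}.")

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # one batch of 3 stories, same parameters
```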
|
mmnga/wikipedia-ja-20230720-1k | 2023-09-26T04:24:04.000Z | [
"region:us"
] | mmnga | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: curid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2746008.4742813315
num_examples: 1024
download_size: 1593280
dataset_size: 2746008.4742813315
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia-ja-20230720-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tessiw/German_GuanacoDataset | 2023-09-26T12:52:18.000Z | [
"task_categories:conversational",
"language:de",
"region:us"
] | tessiw | null | null | null | 1 | 12 | ---
language:
- de
task_categories:
- conversational
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 77973314
num_examples: 139476
download_size: 40038214
dataset_size: 77973314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset is a subset of the [JosephusCheung/GuanacoDataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset/viewer/default/train?p=11736) dataset, in which only German samples were selected and formatted with the following template for chat models:
```<s>[INST] User prompt [/INST] Model answer </s>```
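As a minimal sketch, the template can be applied to a raw prompt/answer pair like this (the dataset itself already stores the formatted result in its `text` column):
```python
# Assemble a training string in the card's chat template.
def format_chat(prompt: str, answer: str) -> str:
    return f"<s>[INST] {prompt} [/INST] {answer} </s>"

print(format_chat("Wie heißt du?", "Ich bin ein Assistent."))
# <s>[INST] Wie heißt du? [/INST] Ich bin ein Assistent. </s>
```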
|
TheAIchemist13/whisper-kannada-audio | 2023-09-27T10:12:59.000Z | [
"region:us"
] | TheAIchemist13 | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 4518573.0
num_examples: 108
download_size: 4455242
dataset_size: 4518573.0
---
# Dataset Card for "whisper-kannada-audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rookshanks/dart | 2023-09-28T02:35:11.000Z | [
"region:us"
] | rookshanks | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: context
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 15361709
num_examples: 62659
- name: validation
num_bytes: 1895789
num_examples: 6980
- name: test
num_bytes: 3429190
num_examples: 12552
download_size: 1145768
dataset_size: 20686688
---
# Dataset Card for "dart"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PericlesSavio/test1 | 2023-09-28T17:34:13.000Z | [
"region:us"
] | PericlesSavio | null | null | null | 0 | 12 | Entry not found |
Weni/Semantic-Search-V1-14K | 2023-09-28T18:33:56.000Z | [
"region:us"
] | Weni | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: int64
- name: produto
dtype: string
splits:
- name: train
num_bytes: 821874
num_examples: 14037
download_size: 421707
dataset_size: 821874
---
# Dataset Card for "Semantic-Search-V1-14K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
demicrat/SpeechLLMv1 | 2023-09-29T05:21:41.000Z | [
"region:us"
] | demicrat | null | null | null | 0 | 12 | |
AustinMcMike/steve_jobs | 2023-09-29T17:30:12.000Z | [
"license:apache-2.0",
"region:us"
] | AustinMcMike | null | null | null | 0 | 12 | ---
license: apache-2.0
---
Created from various interviews/quotes by Steve Jobs |
vickasa/toosiData | 2023-10-05T01:19:14.000Z | [
"license:llama2",
"region:us"
] | vickasa | null | null | null | 0 | 12 | ---
license: llama2
---
|
adamo1139/basic_economics_questions_ts_test_4 | 2023-09-29T22:20:22.000Z | [
"license:apache-2.0",
"region:us"
] | adamo1139 | null | null | null | 0 | 12 | ---
license: apache-2.0
---
|
akshatshah1103/retail-faq | 2023-10-01T03:08:10.000Z | [
"license:apache-2.0",
"region:us"
] | akshatshah1103 | null | null | null | 0 | 12 | ---
license: apache-2.0
---
|
tanvirsrbd1/dataset1_two_app | 2023-10-01T05:17:16.000Z | [
"region:us"
] | tanvirsrbd1 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: id
dtype: string
- name: xml
dtype: string
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1919575
num_examples: 68
download_size: 258813
dataset_size: 1919575
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset1_two_app"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sviluppo/test02 | 2023-10-03T07:46:26.000Z | [
"region:us"
] | Sviluppo | null | null | null | 0 | 12 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
DataStudio/TTS_Speaker_01 | 2023-10-03T04:03:18.000Z | [
"region:us"
] | DataStudio | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: content
dtype: string
splits:
- name: train
num_bytes: 1069341549.668
num_examples: 8518
download_size: 776772238
dataset_size: 1069341549.668
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "TTS_Speaker_01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vsarathy/nl-robotics-semantic-parsing-info_structure-30k-no-context | 2023-10-03T14:32:45.000Z | [
"region:us"
] | vsarathy | null | null | null | 0 | 12 | Entry not found |
FudanSELab/CodeGen4Libs | 2023-10-05T02:24:07.000Z | [
"size_categories:100K<n<1M",
"license:mit",
"code-generation",
"region:us"
] | FudanSELab | FudanSELab CodeGen4Libs Dataset | @inproceedings{ase2023codegen4libs,
author = {Mingwei Liu and Tianyong Yang and Yiling Lou and Xueying Du and Ying Wang and and Xin Peng},
title = {{CodeGen4Libs}: A Two-stage Approach for Library-oriented Code Generation},
booktitle = {38th {IEEE/ACM} International Conference on Automated Software Engineering,
{ASE} 2023, Kirchberg, Luxembourg, September 11-15, 2023},
pages = {0--0},
publisher = {{IEEE}},
year = {2023},
} | null | 2 | 12 | ---
license: mit
tags:
- code-generation
pretty_name: CodeGen4Libs Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for FudanSELab CodeGen4Libs Dataset
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/FudanSELab/codegen4libs)
- **Paper:** [CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation](https://mingwei-liu.github.io/publication/2023-08-18-ase-CodeGen4Libs)
### Dataset Summary
This dataset is used in the ASE2023 paper titled ["CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation"](https://mingwei-liu.github.io/publication/2023-08-18-ase-CodeGen4Libs).
### Languages
[More Information Needed]
## Dataset Structure
```python
from datasets import load_dataset
dataset = load_dataset("FudanSELab/CodeGen4Libs")
DatasetDict({
train: Dataset({
features: ['id', 'method', 'clean_method', 'doc', 'comment', 'method_name', 'extra', 'imports_info', 'libraries_info', 'input_str', 'input_ids', 'tokenized_input_str', 'input_token_length', 'labels', 'tokenized_labels_str', 'labels_token_length', 'retrieved_imports_info', 'retrieved_code', 'imports', 'cluster_imports_info', 'libraries', 'attention_mask'],
num_rows: 391811
})
validation: Dataset({
features: ['id', 'method', 'clean_method', 'doc', 'comment', 'method_name', 'extra', 'imports_info', 'libraries_info', 'input_str', 'input_ids', 'tokenized_input_str', 'input_token_length', 'labels', 'tokenized_labels_str', 'labels_token_length', 'retrieved_imports_info', 'retrieved_code', 'imports', 'cluster_imports_info', 'libraries', 'attention_mask'],
num_rows: 5967
})
test: Dataset({
features: ['id', 'method', 'clean_method', 'doc', 'comment', 'method_name', 'extra', 'imports_info', 'libraries_info', 'input_str', 'input_ids', 'tokenized_input_str', 'input_token_length', 'labels', 'tokenized_labels_str', 'labels_token_length', 'retrieved_imports_info', 'retrieved_code', 'imports', 'cluster_imports_info', 'libraries', 'attention_mask'],
num_rows: 6002
})
})
```
### Data Fields
The specific data fields for each tuple are delineated as follows:
- id: the unique identifier for each tuple.
- method: the original method-level code for each tuple.
- clean_method: the ground-truth method-level code for each task.
- doc: the document of method-level code for each tuple.
- comment: the natural language description for each tuple.
- method_name: the name of the method.
- extra: extra information on the code repository to which the method-level code belongs.
  - license: the license of the code repository.
  - path: the path of the code repository.
  - repo_name: the name of the code repository.
  - size: the size of the code repository.
- imports_info: the import statements for each tuple.
- libraries_info: the libraries info for each tuple.
- input_str: the design of model input.
- input_ids: the ids of tokenized input.
- tokenized_input_str: the tokenized input.
- input_token_length: the length of the tokenized input.
- labels: the ids of tokenized output.
- tokenized_labels_str: the tokenized output.
- labels_token_length: the length of the tokenized output.
- retrieved_imports_info: the retrieved import statements for each tuple.
- retrieved_code: the retrieved method-level code for each tuple.
- imports: the imported packages of each import statement.
- cluster_imports_info: cluster import information of code.
- libraries: libraries used by the code.
- attention_mask: attention mask for the input.
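As a minimal sketch, individual fields can be read directly after loading (field names as listed above):
```python
from datasets import load_dataset

# Load only the test split and inspect a few of the fields above.
test_set = load_dataset("FudanSELab/CodeGen4Libs", split="test")
example = test_set[0]
print(example["comment"])       # natural-language description
print(example["libraries"])     # libraries used by the code
print(example["clean_method"])  # ground-truth method-level code
```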
### Data Splits
The dataset is split into a training set, a validation set, and a test set, with 391,811, 5,967, and 6,002 rows respectively.
## Additional Information
### Citation Information
```
@inproceedings{ase2023codegen4libs,
author = {Mingwei Liu and Tianyong Yang and Yiling Lou and Xueying Du and Ying Wang and and Xin Peng},
title = {{CodeGen4Libs}: A Two-stage Approach for Library-oriented Code Generation},
booktitle = {38th {IEEE/ACM} International Conference on Automated Software Engineering,
{ASE} 2023, Kirchberg, Luxembourg, September 11-15, 2023},
pages = {0--0},
publisher = {{IEEE}},
year = {2023},
}
``` |
Sathvik-24/engtohinglish | 2023-10-05T06:18:40.000Z | [
"region:us"
] | Sathvik-24 | null | null | null | 0 | 12 | |
trunks/graph_tt | 2023-10-05T06:33:17.000Z | [
"region:us"
] | trunks | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 131322.0
num_examples: 8
download_size: 99680
dataset_size: 131322.0
---
# Dataset Card for "graph_tt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adamo1139/PS_AD_Office365_03 | 2023-10-05T00:20:42.000Z | [
"region:us"
] | adamo1139 | null | null | null | 0 | 12 | Previous version with a subset of spicyboros 2.2 coding samples plus some a few other new PowerShell scripting samples. Some formatting fixes. |
Intuit-GenSRF/sexting-nsfw-adultconten | 2023-10-05T01:05:04.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 33518
num_examples: 538
download_size: 19162
dataset_size: 33518
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sexting-nsfw-adultconten"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/massive_5_lang_DA_tokenized | 2023-10-06T06:00:05.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 424287645
num_examples: 552890
download_size: 127805722
dataset_size: 424287645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_5_lang_DA_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Anujgr8/Test_bang_data | 2023-10-08T03:19:41.000Z | [
"license:mit",
"region:us"
] | Anujgr8 | null | null | null | 1 | 12 | ---
license: mit
---
|
Joragasy/CultureNuc_ft | 2023-10-10T07:22:03.000Z | [
"license:mit",
"region:us"
] | Joragasy | null | null | null | 0 | 12 | ---
license: mit
---
|
marcus2000/timelist_summary_dataset | 2023-10-06T13:10:36.000Z | [
"region:us"
] | marcus2000 | null | null | null | 0 | 12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Original
dtype: string
- name: Summary
dtype: string
splits:
- name: train
num_bytes: 352926.0853658537
num_examples: 278
- name: test
num_bytes: 63475.91463414634
num_examples: 50
download_size: 227279
dataset_size: 416402.0
---
# Dataset Card for "timelist_summary_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llama2d/llama2d-unscramble-small | 2023-10-07T02:17:35.000Z | [
"region:us"
] | llama2d | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: input_ids
sequence: float32
- name: coords
sequence:
sequence: float32
- name: labels
sequence: float32
- name: attention_mask
sequence: float32
splits:
- name: train
num_bytes: 30080000
num_examples: 5000
download_size: 1614133
dataset_size: 30080000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2d-unscramble-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AryanNsc/Mainspacehubdata | 2023-10-08T16:42:43.000Z | [
"region:us"
] | AryanNsc | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10911
num_examples: 39
download_size: 8319
dataset_size: 10911
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mainspacehubdata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mcorsa/swifterX-4k-clean | 2023-10-08T21:19:31.000Z | [
"license:apache-2.0",
"region:us"
] | mcorsa | null | null | null | 0 | 12 | ---
license: apache-2.0
---
|
surajbijjahalli/semantic_seg_ATL | 2023-10-08T23:43:04.000Z | [
"region:us"
] | surajbijjahalli | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 156066511.354
num_examples: 1407
download_size: 155003543
dataset_size: 156066511.354
---
# Dataset Card for "semantic_seg_ATL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
salsarra/SQAC-Corrected | 2023-10-09T14:56:46.000Z | [
"region:us"
] | salsarra | null | null | null | 0 | 12 | Entry not found |
result-kand2-sdxl-wuerst-karlo/02dd1f44 | 2023-10-10T00:35:21.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 12 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 158
num_examples: 10
download_size: 1302
dataset_size: 158
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "02dd1f44"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atomic | 2022-11-18T18:56:37.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"common-sense-if-then-reasoning",
"region:us"
] | null | This dataset provides the template sentences and
relationships defined in the ATOMIC common sense dataset. There are
three splits - train, test, and dev.
From the authors.
Disclaimer/Content warning: the events in atomic have been
automatically extracted from blogs, stories and books written at
various times. The events might depict violent or problematic actions,
which we left in the corpus for the sake of learning the (probably
negative but still important) commonsense implications associated with
the events. We removed a small set of truly out-dated events, but
might have missed some so please email us (msap@cs.washington.edu) if
you have any concerns. | @article{Sap2019ATOMICAA,
title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},
author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},
journal={ArXiv},
year={2019},
volume={abs/1811.00146}
} | null | 5 | 11 | ---
pretty_name: ATOMIC
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: atomic
tags:
- common-sense-if-then-reasoning
dataset_info:
features:
- name: event
dtype: string
- name: oEffect
sequence: string
- name: oReact
sequence: string
- name: oWant
sequence: string
- name: xAttr
sequence: string
- name: xEffect
sequence: string
- name: xIntent
sequence: string
- name: xNeed
sequence: string
- name: xReact
sequence: string
- name: xWant
sequence: string
- name: prefix
sequence: string
- name: split
dtype: string
config_name: atomic
splits:
- name: train
num_bytes: 32441878
num_examples: 202271
- name: test
num_bytes: 3995624
num_examples: 24856
- name: validation
num_bytes: 3629768
num_examples: 22620
download_size: 19083782
dataset_size: 40067270
---
# Dataset Card for An Atlas of Machine Commonsense for If-Then Reasoning - Atomic Common Sense Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://homes.cs.washington.edu/~msap/atomic/
- **Repository:**
https://homes.cs.washington.edu/~msap/atomic/
- **Paper:**
Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith & Yejin Choi (2019). ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. AAAI
### Dataset Summary
This dataset provides the template sentences and
relationships defined in the ATOMIC common sense dataset. There are
three splits - train, test, and dev.
From the authors.
Disclaimer/Content warning: the events in atomic have been
automatically extracted from blogs, stories and books written at
various times. The events might depict violent or problematic actions,
which we left in the corpus for the sake of learning the (probably
negative but still important) commonsense implications associated with
the events. We removed a small set of truly out-dated events, but
might have missed some so please email us (msap@cs.washington.edu) if
you have any concerns.
For more information, see: https://homes.cs.washington.edu/~msap/atomic/
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
en
## Dataset Structure
### Data Instances
Here is one example from the atomic dataset:
```
{'event': "PersonX uses PersonX's ___ to obtain", 'oEffect': [], 'oReact': ['annoyed', 'angry', 'worried'], 'oWant': [], 'prefix': ['uses', 'obtain'], 'split': 'trn', 'xAttr': [], 'xEffect': [], 'xIntent': ['to have an advantage', 'to fulfill a desire', 'to get out of trouble'], 'xNeed': [], 'xReact': ['pleased', 'smug', 'excited'], 'xWant': []}
```
### Data Fields
Notes from the authors:
* event: just a string representation of the event.
* oEffect,oReact,oWant,xAttr,xEffect,xIntent,xNeed,xReact,xWant: annotations for each of the dimensions, stored in a json-dumped string.
Note: "none" means the worker explicitly responded with the empty response, whereas [] means the worker did not annotate this dimension.
* prefix: json-dumped string that represents the prefix of content words (used to make a better trn/dev/tst split).
* split: string rep of which split the event belongs to.
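A minimal sketch of inspecting these fields with the `datasets` library (assuming the `atomic` loading script resolves as it did when this card was written):
```python
from datasets import load_dataset

# Load the train split of ATOMIC; the config name comes from the card above.
atomic = load_dataset("atomic", split="train")

example = atomic[0]
print(example["event"])    # string representation of the event
print(example["xIntent"])  # annotations for one of the nine dimensions
print(example["prefix"])   # content-word prefix used for the trn/dev/tst split
```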
### Data Splits
The atomic dataset has three splits: train, dev, and test.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to assist in common sense reasoning.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was created by the University of Washington using crowd-sourced data.
#### Who are the source language producers?
The ATOMIC authors and crowd-source workers.
### Annotations
#### Annotation process
Human annotations directed by forms.
#### Who are the annotators?
Human annotators.
### Personal and Sensitive Information
Unknown, but likely none.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to help machines understand common sense.
### Discussion of Biases
Since the data comes from human annotators, it is likely to be biased. From the authors:
Disclaimer/Content warning: the events in atomic have been automatically extracted from blogs, stories and books written at various times. The events might depict violent or problematic actions, which we left in the corpus for the sake of learning the (probably negative but still important) commonsense implications associated with the events. We removed a small set of truly out-dated events, but might have missed some so please email us (msap@cs.washington.edu) if you have any concerns.
### Other Known Limitations
While there are many relationships, the data is quite sparse. Also, each item of the dataset could be expanded into multiple sentences along the various dimensions: oEffect, oReact, etc.
For example, given event: "PersonX uses PersonX's ___ to obtain" and dimension oReact: "annoyed", this could be transformed into an entry:
"PersonX uses PersonX's ___ to obtain => PersonY is annoyed"
## Additional Information
### Dataset Curators
The authors of ATOMIC at the University of Washington.
### Licensing Information
The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/
### Citation Information
@article{Sap2019ATOMICAA,
title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},
author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},
journal={ArXiv},
year={2019},
volume={abs/1811.00146}
}
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
blbooks | 2022-11-03T16:31:29.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:other",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"sou... | null | A dataset comprising of text created by OCR from the 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 - c. 1900.
The books cover a wide range of subject areas including philosophy, history, poetry and literature. | @misc{BritishLibraryBooks2021,
author = {British Library Labs},
title = {Digitised Books. c. 1510 - c. 1900. JSONL (OCR derived text + metadata)},
year = {2021},
publisher = {British Library},
howpublished={https://doi.org/10.23636/r7w6-zy15} | null | 6 | 11 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- de
- en
- es
- fr
- it
- nl
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: British Library Books
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- other
task_ids:
- language-modeling
- masked-language-modeling
tags:
- digital-humanities-research
dataset_info:
- config_name: all
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30394267732
num_examples: 14011953
download_size: 10486035662
dataset_size: 30394267732
- config_name: 1800s
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30020434670
num_examples: 13781747
download_size: 10348577602
dataset_size: 30020434670
- config_name: 1700s
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 266382657
num_examples: 178224
download_size: 95137895
dataset_size: 266382657
- config_name: '1510_1699'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 107667469
num_examples: 51982
download_size: 42320165
dataset_size: 107667469
- config_name: '1500_1899'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30452067039
num_examples: 14011953
download_size: 10486035662
dataset_size: 30452067039
- config_name: '1800_1899'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30077284377
num_examples: 13781747
download_size: 10348577602
dataset_size: 30077284377
- config_name: '1700_1799'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 267117831
num_examples: 178224
download_size: 95137895
dataset_size: 267117831
---
# Dataset Card for British Library Books
## Table of Contents
- [Dataset Card for British Library Books](#dataset-card-for-British-Library-Books)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Language model training](#language-model-training)
- [Supervised tasks](#supervised-tasks)
- [Languages](#languages)
- [Language change](#language-change)
- [Optical Character Recognition](#optical-character-recognition)
- [OCR word confidence](#ocr-word-confidence)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Date normalization](#date-normalization)
- [Metadata included](#metadata-included)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Colonialism](#colonialism)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.bl.uk/collection-guides/digitised-printed-books
- **Repository:** https://doi.org/10.21250/db14
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** labs@bl.uk
### Dataset Summary
This dataset consists of books digitised by the British Library in partnership with Microsoft. The dataset includes ~25 million pages of out-of-copyright texts. The majority of the texts were published in the 18th and 19th centuries, but the collection also contains a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas, including geography, philosophy, history, poetry and literature, and are published in various languages.
The number of pages in the corpus by decade:
| | page count |
| ---- | ---------- |
| 1510 | 94 |
| 1520 | 32 |
| 1540 | 184 |
| 1550 | 16 |
| 1580 | 276 |
| 1590 | 540 |
| 1600 | 1117 |
| 1610 | 1132 |
| 1620 | 1856 |
| 1630 | 9274 |
| 1640 | 4232 |
| 1650 | 2944 |
| 1660 | 5858 |
| 1670 | 11415 |
| 1680 | 8348 |
| 1690 | 13756 |
| 1700 | 10160 |
| 1710 | 9556 |
| 1720 | 10314 |
| 1730 | 13282 |
| 1740 | 10778 |
| 1750 | 12001 |
| 1760 | 21415 |
| 1770 | 28490 |
| 1780 | 32676 |
| 1790 | 50014 |
| 1800 | 307806 |
| 1810 | 478008 |
| 1820 | 589419 |
| 1830 | 681212 |
| 1840 | 1113473 |
| 1850 | 1726108 |
| 1860 | 1725407 |
| 1870 | 2069089 |
| 1880 | 2585159 |
| 1890 | 3365031 |
[More Information Needed]
### Supported Tasks and Leaderboards
This collection has been previously used across various digital history and humanities projects since being published.
The dataset consists of text and a range of metadata associated with this text. This metadata includes:
- date of publication
- place of publication
- country of publication
- language
- OCR quality
- physical description of the original physical item
#### Language model training
As a relatively large dataset, `blbooks` provides a source dataset for training language models. The presence of this metadata also offers interesting opportunities to use this dataset as a source for training language models based on:
- specific time-periods
- specific languages
- certain OCR quality thresholds
The above is not an exhaustive list but offer some suggestions of how the dataset can be used to explore topics such as the impact of OCR quality on language models, the ‘transferability’ of language models across time or the impact of training multilingual language models on historical languages.
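As an illustration, a minimal sketch of selecting such a subset with the `datasets` library (the `blbooks` identifier, config name and `mean_wc_ocr` field follow this card; the 0.7 threshold is an arbitrary choice, and newer versions of `datasets` may additionally require `trust_remote_code=True` for script-based datasets):

```python
from datasets import load_dataset

# Load the 18th-century configuration (see "Dataset Structure" below).
ds = load_dataset("blbooks", "1700_1799", split="train")

# Keep only pages whose mean OCR word confidence exceeds a threshold,
# e.g. to build a higher-quality language-modelling corpus.
high_quality = ds.filter(lambda page: page["mean_wc_ocr"] > 0.7)
```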
#### Supervised tasks
Whilst this dataset does not have annotations for a specific NLP task, such as Named Entity Recognition, it does include a wide variety of metadata. This metadata has the potential to be used for training and/or evaluating a variety of supervised tasks predicting this metadata.
### Languages
This dataset consists of books published in several languages. The breakdown of the languages included (at the page level) is:
| Language | Pages |
| --------------------- | -------- |
| English | 10039463 |
| French | 1442929 |
| German | 1172793 |
| Spanish | 286778 |
| Italian | 214255 |
| Dutch | 204759 |
| Russian | 193347 |
| Danish | 93366 |
| Hungarian | 88094 |
| Swedish | 76225 |
| Polish | 58901 |
| Greek, Modern (1453-) | 26104 |
| Latin | 25611 |
| Portuguese | 25410 |
| Czech | 20160 |
| Bulgarian | 7891 |
| Finnish | 5677 |
| Irish | 2743 |
| Serbian | 1975 |
| Romanian | 1544 |
| Norwegian Nynorsk | 1398 |
| Croatian | 1306 |
| Norwegian | 1227 |
| Icelandic | 902 |
| Slovak | 840 |
| Lithuanian | 714 |
| Welsh | 580 |
| Slovenian | 545 |
| Indonesian | 418 |
| Cornish | 223 |
This breakdown was derived from the first language in the associated metadata field. Some books include multiple languages. Some of the language codes for this data were also derived using computational methods. Therefore, the language fields in the dataset should be treated with some caution (discussed in more detail below).
#### Language change
The publication dates of books in the data cover a broad period of time (1500-1900). For languages in the dataset with broad temporal coverage, significant [language change](https://en.wikipedia.org/wiki/Language_change) might be found. The ability to study this change by taking reasonably large samples of languages covering different time periods is one of the opportunities offered by this dataset. The fact that the text in this dataset was produced via Optical Character Recognition (OCR) causes some challenges for this type of research (see below).
#### Optical Character Recognition
The digitised books in this collection were transformed into machine-readable text using Optical Character Recognition (OCR) software. The text produced via OCR software will usually include some errors. These errors include mistakes at the character level (for example, an `i` mistaken for an `l`), at the word level, or across significant passages of text.
The books in this dataset can pose some additional challenges for OCR software. OCR errors can stem from:
- the quality of the original printing: printing technology was a developing technology during the time period covered by this corpus; some of the original book text will include misprints, blurred or faded ink that is hard to read
- damage to the page: some of the books will have become damaged over time; this can obscure all or parts of the text on a page
- poor quality scans: scanning books can be challenging; for example, if the book has tight bindings, it can be hard to capture text that has fallen into the [gutter](https://www.abaa.org/glossary/entry/gutter) of the book.
- the language used in the books may differ from the languages OCR software is predominantly trained to recognise.
##### OCR word confidence
Many OCR engines produce some form of confidence score alongside the predicted text. These confidence scores are usually at the character or word level. A word confidence score was given for each word in the original ALTO XML versions of the text in this dataset. The OCR confidence scores should be treated with some scepticism. For historical text, or for a lower-resource language, a low confidence score may be assigned to words that are not in a modern dictionary but are nevertheless accurate transcriptions of the original text. With that said, the confidence scores do give some sense of the OCR quality.
An example of text with a high (over 90% mean word confidence score):
```
8 direction to the Conduit, round which is a wide open space, and a good broad pavement called the Parade. It commands a pleasant peep of the slopes and terrace throughout its entire length. The street continuing from the Conduit, in the same general direction, was known anciently as Lodborne Lane, and is now named South Street. From the Conduit two other streets, at right angles to these, are Long Street, leading Eastwards, and Half-Moon Street (formerly Lodborne), leading to Westbury, Trendle Street, and the Horsecastles Road.
```
An example of text with a score below 40%:
```
Hannover. Schrift und Druck von Fr. CultniTmn,',
"LeMNs'utluirui.",
'ü 8u«llim» M^äalßwi 01de!lop 1<M.',
'p^dnalmw vom Xr^u/e, lpiti>»**Kmm lie« !»^2!M kleine lii!<! (,«>* ttünee!<»e^ v»n tndzt Lievclum, 1872,
```
The quality of the OCR, as measured by the mean OCR confidence for a page, correlates with other features across the dataset. A groupby of publication decade and mean word confidence:
| decade | mean_wc_ocr |
| ------ | ----------- |
| 1510 | 0.499151 |
| 1520 | 0.544818 |
| 1540 | 0.511589 |
| 1550 | 0.4505 |
| 1580 | 0.321858 |
| 1590 | 0.461282 |
| 1600 | 0.467318 |
| 1610 | 0.495895 |
| 1620 | 0.501257 |
| 1630 | 0.49766 |
| 1640 | 0.512095 |
| 1650 | 0.528534 |
| 1660 | 0.521014 |
| 1670 | 0.592575 |
| 1680 | 0.583901 |
| 1690 | 0.567202 |
| 1700 | 0.575175 |
| 1710 | 0.61436 |
| 1720 | 0.627725 |
| 1730 | 0.658534 |
| 1740 | 0.64214 |
| 1750 | 0.657357 |
| 1760 | 0.6389 |
| 1770 | 0.651883 |
| 1780 | 0.632326 |
| 1790 | 0.664279 |
| 1800 | 0.682338 |
| 1810 | 0.708915 |
| 1820 | 0.730015 |
| 1830 | 0.730973 |
| 1840 | 0.713886 |
| 1850 | 0.697106 |
| 1860 | 0.696701 |
| 1870 | 0.717233 |
| 1880 | 0.733331 |
| 1890 | 0.762364 |
As might be expected, the earlier periods have lower mean word confidence scores. Again, all of this should be treated with some scepticism, especially since the amount of data varies considerably across time periods.
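A table like the one above can be derived in a few lines of pandas (a sketch assuming, per the "Data Fields" section below, that `date` is the normalised integer year):

```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("blbooks", "1700_1799", split="train")
df = ds.to_pandas()

# Bucket publication years into decades and average the word confidence.
df["decade"] = (df["date"] // 10) * 10
print(df.groupby("decade")["mean_wc_ocr"].mean())
```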
As with time, the mean word confidence of the OCR software varies across languages:
| Language_1 | mean_wc_ocr |
| --------------------- | ----------- |
| Croatian | 0.755565 |
| Welsh | 0.7528 |
| Norwegian Nynorsk | 0.751648 |
| Slovenian | 0.746007 |
| French | 0.740772 |
| Finnish | 0.738032 |
| Czech | 0.737849 |
| Hungarian | 0.736076 |
| Dutch | 0.734977 |
| Cornish | 0.733682 |
| Danish | 0.733106 |
| English | 0.733037 |
| Irish | 0.732658 |
| Portuguese | 0.727746 |
| Spanish | 0.725111 |
| Icelandic | 0.724427 |
| Italian | 0.715839 |
| Swedish | 0.715633 |
| Polish | 0.715133 |
| Lithuanian | 0.700003 |
| Bulgarian | 0.694657 |
| Romanian | 0.692957 |
| Latin | 0.689022 |
| Russian | 0.685847 |
| Serbian | 0.674329 |
| Slovak | 0.66739 |
| Greek, Modern (1453-) | 0.632195 |
| German | 0.631457 |
| Indonesian | 0.6155 |
| Norwegian | 0.597987 |
Again, these numbers should be treated sceptically since some languages appear very infrequently. For example, the above table suggests the mean word confidence for Welsh is relatively high. However, there isn’t much Welsh in the dataset. Therefore, it is unlikely that this data will be particularly useful for training (historic) Welsh language models.
[More Information Needed]
## Dataset Structure
The dataset has a number of configurations relating to the different dates of publication in the underlying data:
- `1500_1899`: this configuration covers all years
- `1800_1899`: this configuration covers the years between 1800 and 1899
- `1700_1799`: this configuration covers the years between 1700 and 1799
- `1510_1699`: this configuration covers the years between 1510 and 1699
### Configuration option
All of the configurations have an optional keyword argument `skip_empty_pages` which is set to `True` by default. The underlying dataset includes some pages where there is no text. This could either be because the underlying book page didn't have any text or the OCR software failed to detect this text.
For many uses of this dataset it doesn't make sense to include empty pages, so these are skipped by default. However, for some uses you may prefer to retain a representation of the data that includes these empty pages. Passing `skip_empty_pages=False` when loading the dataset enables this option, as in the sketch below.
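A minimal sketch of both behaviours (assuming the loading script accepts the keyword as described above):

```python
from datasets import load_dataset

# Default behaviour: pages without any text are skipped.
ds = load_dataset("blbooks", "1510_1699", split="train")

# Retain a representation of the empty pages as well.
ds_with_empty = load_dataset(
    "blbooks", "1510_1699", split="train", skip_empty_pages=False
)
```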
### Data Instances
An example data instance:
```python
{'Country of publication 1': 'England',
'Language_1': 'English',
'Language_2': None,
'Language_3': None,
'Language_4': None,
'Physical description': None,
'Publisher': None,
'all Countries of publication': 'England',
'all names': 'Settle, Elkanah [person]',
'date': 1689,
'empty_pg': True,
'mean_wc_ocr': 0.0,
'multi_language': False,
'name': 'Settle, Elkanah',
'pg': 1,
'place': 'London',
'raw_date': '1689',
'record_id': '001876770',
'std_wc_ocr': 0.0,
'text': None,
'title': 'The Female Prelate: being the history and the life and death of Pope Joan. A tragedy [in five acts and in verse] . Written by a Person of Quality [i.e. Elkanah Settle]'}
```
Each instance in the dataset represents a single page from an original digitised book.
### Data Fields
Included in this dataset are:
| Field | Data Type | Description |
| ---------------------------- | --------- | ------------------------------------------------------------------------------------------------------------- |
| record_id | string | British Library ID for the item |
| date | int | parsed/normalised year for the item, e.g. 1850 |
| raw_date | string | the original raw date for an item, e.g. `1850-` |
| title | string | title of the book |
| place | string | place of publication, e.g. London |
| empty_pg | bool | whether the page is empty, i.e. contains no text |
| text | string | OCR generated text for a page |
| pg | int | page in original book the instance refers to |
| mean_wc_ocr | float | mean word confidence values for the page |
| std_wc_ocr | float | standard deviation of the word confidence values for the page |
| name | string | name associated with the item (usually author) |
| all names | string | all names associated with a publication |
| Publisher | string | publisher of the book |
| Country of publication 1 | string | first country associated with publication |
| all Countries of publication | string | all countries associated with a publication |
| Physical description | string | physical description of the item (size). This requires some normalisation before use and isn’t always present |
| Language_1 | string | first language associated with the book, this is usually present |
| Language_2 | string | |
| Language_3 | string | |
| Language_4 | string | |
| multi_language | bool | |
Some of these fields are not populated a large proportion of the time. You can get some sense of this from this [Pandas Profiling](https://github.com/pandas-profiling/pandas-profiling) [report](https://davanstrien.github.io/BL-datasets-pandas-profile-reports/pandas_profile_report_MS_digitised_books_2021-01-09.html)
The majority of these fields relate to metadata about the books. Most of these fields were created by staff working for the British Library. The notable exception is the “Language” fields, which have sometimes been determined using computational methods. This work is reported in more detail in [Automated Language Identification of Bibliographic Resources](https://doi.org/10.1080/01639374.2019.1700201). It is important to note that metadata is neither perfect nor static. The metadata associated with these books was generated from an export of the British Library catalogue in 2021.
[More Information Needed]
### Data Splits
This dataset contains a single split `train`.
## Dataset Creation
**Note** this section is a work in progress.
### Curation Rationale
The books in this collection were digitised as part of a project partnership between the British Library and Microsoft. [Mass digitisation](https://en.wikipedia.org/wiki/Category:Mass_digitization), i.e. projects intending to quickly digitise large volumes of materials, shapes the selection of materials in several ways. Some considerations often involved in the decision of whether to include items for digitisation include (but are not limited to):
- copyright status
- preservation needs
- the size of an item: very large and very small items are often hard to digitise quickly
These criteria can have knock-on effects on the makeup of a collection. For example, systematically excluding large books may result in some types of book content not being digitised. Large volumes are likely to be correlated with content to at least some extent, so excluding them from digitisation will mean that such material is underrepresented. Similarly, copyright status is often (but not only) determined by publication date. This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date.
All of the above is largely to make clear that this collection was not curated to create a representative sample of the British Library’s holdings. Some material will be over-represented, and others under-represented. Similarly, the collection should not be considered a representative sample of what was published across the period covered by the dataset (nor that the relative proportions of the data for each time period represent a proportional sample of publications from that period). Finally, and this probably does not need stating, the language included in the text should not be considered representative of either written or spoken language(s) from that time period.
[More Information Needed]
### Source Data
The source data (physical items) includes a variety of resources (predominantly monographs) held by the [British Library](https://bl.uk/). The British Library is a [Legal Deposit](https://www.bl.uk/legal-deposit/about-legal-deposit) library. “Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It’s existed in English law since 1662.” [source](https://www.bl.uk/legal-deposit/about-legal-deposit)
The source data for this version of the data is derived from the original ALTO XML files and a recent metadata export #TODO add links
[More Information Needed]
#### Initial Data Collection and Normalization
This version of the dataset was created using the original ALTO XML files and, where a match was found, updating the metadata associated with that item with more recent metadata using an export from the British Library catalogue. The process of creating this new dataset is documented here #TODO add link.
There are a few decisions made in the above processing steps worth highlighting in particular:
##### Date normalization
The metadata around the date of publication for an item is not always exact. It is often represented as a date range, e.g. `1850-1860`. The `date` field above takes steps to normalise this date to a single integer value. In most cases, this is done by taking the mean of the values associated with the item. The `raw_date` field includes the unprocessed date string.
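A minimal sketch of this kind of normalisation (an illustration of the idea, not the exact code used to build the dataset):

```python
import re

def normalise_date(raw_date):
    # "1850-1860" -> mean of 1850 and 1860 -> 1855; "1850" -> 1850
    years = [int(y) for y in re.findall(r"\d{4}", raw_date)]
    return round(sum(years) / len(years)) if years else None

print(normalise_date("1850-1860"))  # 1855
```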
##### Metadata included
The metadata associated with each item includes most of the fields available via the ALTO XML. However, the data doesn't include some metadata fields from the metadata export file. Fields were excluded because they are frequently not populated. A cut-off of 50% was chosen, i.e. metadata values that are missing more than 50% of the time were not included. This is slightly arbitrary, but since the aim of this version of the data was to support computational research using the collection, it was felt that fields with frequently missing values would be less valuable.
#### Who are the source language producers?
[More Information Needed]
### Annotations
This dataset does not include annotations as usually understood in the context of NLP. The data does include metadata associated with the books.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
There are a range of considerations around using the data. These include the representativeness of the dataset, the OCR quality and the language used. Depending on your use case, these may be more or less important. For example, the impact of OCR quality on downstream tasks will depend on the target task. It may also be possible to mitigate the negative impact of OCR errors through tokenizer choice, language model training objectives, oversampling high-quality OCR, etc.
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The text in this collection is derived from historical texts. As a result, it will reflect the social beliefs and attitudes of the time periods in which it was written. The collection includes both fiction and non-fiction.
Examples of book titles that appear in the data (these are randomly sampled from all titles):
- ‘Rhymes and Dreams, Legends of Pendle Forest, and other poems’,
- “Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General’s Department, Horse Guards, War Office, etc”,
- ‘The fan. A poem’,
- ‘Grif; a story of Australian Life’,
- ‘Calypso; a masque: in three acts, etc’,
- ‘Tales Uncle told [With illustrative woodcuts.]’,
- 'Questings',
- 'Home Life on an Ostrich Farm. With ... illustrations’,
- ‘Bulgarya i Bulgarowie’,
- 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',
- ‘The Corsair, a tale’,
- ‘Poems ... With notes [With a portrait.]’,
- ‘Report of the Librarian for the year 1898 (1899, 1901, 1909)’,
- “The World of Thought. A novel. By the author of ‘Before I began to speak.’”,
- 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc'
While titles alone are insufficient to interrogate bias in this collection, they give some insight into the topics covered by the books. Further, the titles highlight some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list.
#### Colonialism
Even in the above random sample of titles, we can see examples of colonial attitudes. We can try to interrogate this further by searching for the names of places that were part of the British Empire when many of these books were published.
Searching for the string `India` in the titles and randomly sampling 10 titles returns:
- “Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the ‘Calcutta Weekly Englishman.’”,
- ‘A Winter in India and Malaysia among the Methodist Missions’,
- “The Tourist’s Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition”,
- ‘Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson’,
- "Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]”,
- ‘The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies’,
- “From Tonkin to India : by the sources of the Irawadi, January’ 95-January ’96”,
- ‘Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844’,
- ‘The Andaman Islands; their colonisation, etc. A correspondence addressed to the India Office’,
- ‘Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle’
Searching for the string `Africa` in the titles and randomly sampling 10 titles returns:
- 'De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada',
- ‘To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]’,
- ‘Diamonds and Gold in South Africa ... With maps, etc’,
- ‘Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition’,
- ‘A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts’,
- ‘Side Lights on South Africa ... With a map, etc’,
- ‘My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc’,
- ‘Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations’,
- ‘[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. Conder’,
- ‘Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... With a map, and numerous illustrations, etc’
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The books are licensed under the [CC Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/) license.
### Citation Information
```bibtex
@misc{BritishLibraryBooks2021,
author = {British Library Labs},
title = {Digitised Books. c. 1510 - c. 1900. JSONL (OCR derived text + metadata)},
year = {2021},
publisher = {British Library},
howpublished={https://doi.org/10.23636/r7w6-zy15}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset. |
code_x_glue_cc_code_completion_line | 2023-06-01T14:59:47.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:slot-filling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:code",
"license:c-uda",
"re... | null | Complete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.
We propose the line completion task to test a model's ability to autocomplete a line. Most code completion systems behave well in token-level completion, but fail in completing an unfinished line such as a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software developer has finished one or more tokens of the current line, the line-level completion model is expected to generate the entire line of syntactically correct code.
Line level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion. | @article{raychev2016probabilistic,
title={Probabilistic Model for Code with Decision Trees},
author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
journal={ACM SIGPLAN Notices},
pages={731--747},
year={2016},
publisher={ACM New York, NY, USA}
}
@inproceedings{allamanis2013mining,
title={Mining Source Code Repositories at Massive Scale using Language Modeling},
author={Allamanis, Miltiadis and Sutton, Charles},
booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
pages={207--216},
year={2013},
organization={IEEE}
} | null | 1 | 11 | ---
annotations_creators:
- found
language_creators:
- found
language:
- code
license:
- c-uda
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- slot-filling
pretty_name: CodeXGlueCcCodeCompletionLine
dataset_info:
- config_name: java
features:
- name: id
dtype: int32
- name: input
dtype: string
- name: gt
dtype: string
splits:
- name: train
num_bytes: 5454783
num_examples: 3000
download_size: 5523586
dataset_size: 5454783
- config_name: python
features:
- name: id
dtype: int32
- name: input
dtype: string
- name: gt
dtype: string
splits:
- name: train
num_bytes: 24021562
num_examples: 10000
download_size: 24266715
dataset_size: 24021562
config_names:
- go
- java
- javascript
- php
- python
- ruby
---
# Dataset Card for "code_x_glue_cc_code_completion_line"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line
### Dataset Summary
CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line
Complete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.
We propose the line completion task to test a model's ability to autocomplete a line. Most code completion systems behave well in token-level completion, but fail in completing an unfinished line such as a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software developer has finished one or more tokens of the current line, the line-level completion model is expected to generate the entire line of syntactically correct code.
Line level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion.
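A minimal sketch of the two reported metrics (an approximation, not the official CodeXGLUE evaluation script, which uses a Levenshtein-based edit similarity):

```python
import difflib

def exact_match(prediction: str, ground_truth: str) -> bool:
    # Exact match after trimming surrounding whitespace.
    return prediction.strip() == ground_truth.strip()

def edit_similarity(prediction: str, ground_truth: str) -> float:
    # Character-level similarity in [0, 1]; difflib's ratio only
    # approximates a true edit-distance-based score.
    matcher = difflib.SequenceMatcher(None, prediction.strip(), ground_truth.strip())
    return matcher.ratio()

print(exact_match("return x + 1", "return x + 1"))              # True
print(round(edit_similarity("return x + 1", "return x+1"), 2))  # ~0.91
```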
### Supported Tasks and Leaderboards
- `slot-filling`: The dataset can be used to train a model for completing entire code lines.
### Languages
- Java **programming** language
- Python **programming** language
## Dataset Structure
### Data Instances
#### java
An example of 'train' looks as follows.
```
{
"gt": "",
"id": 0,
"input": "<s> package org . rubypeople . rdt . internal . ui . rubyeditor ; import java . util . Iterator ; import org . eclipse . core . resources . IMarker ; import org . eclipse . ui . texteditor . MarkerAnnotation ; import org . eclipse . ui . texteditor . MarkerUtilities ; import org . rubypeople . rdt . core . IRubyElement ; import org . rubypeople . rdt . core . IRubyModelMarker ; import org . rubypeople . rdt . core . IRubyScript ; import org . rubypeople . rdt . core . RubyCore ; public class RubyMarkerAnnotation extends MarkerAnnotation implements IRubyAnnotation { public static final String RUBY_MARKER_TYPE_PREFIX = \"\" ; public static final String ERROR_ANNOTATION_TYPE = \"\" ; public static final String WARNING_ANNOTATION_TYPE = \"\" ; public static final String INFO_ANNOTATION_TYPE = \"\" ; public static final String TASK_ANNOTATION_TYPE = \"\" ; private IRubyAnnotation fOverlay ; public RubyMarkerAnnotation ( IMarker marker ) { super ( marker ) ; } public String [ ] getArguments ( ) { return null ; } public int getId ( ) { IMarker marker = getMarker ( ) ; if ( marker == null || ! marker . exists ( ) ) return - 1 ; if ( isProblem ( ) ) return marker . getAttribute ( IRubyModelMarker . ID , - 1 ) ; return - 1 ; } public boolean isProblem ( ) { String type = getType ( ) ; return WARNING_ANNOTATION_TYPE . equals ( type ) || ERROR_ANNOTATION_TYPE . equals"
}
```
#### python
An example of 'train' looks as follows.
```
{
"gt": "",
"id": 0,
"input": "<s> from __future__ import absolute_import <EOL> import weakref <EOL> import operator <EOL> from . compat import threading , itertools_filterfalse <EOL> from . import py2k <EOL> import types <EOL> EMPTY_SET = frozenset ( ) <EOL> class KeyedTuple ( tuple ) : <EOL> def __new__ ( cls , vals , labels = None ) : <EOL> t = tuple . __new__ ( cls , vals ) <EOL> t . _labels = [ ] <EOL> if labels : <EOL> t . __dict__ . update ( zip ( labels , vals ) ) <EOL> t . _labels = labels <EOL> return t <EOL> def keys ( self ) : <EOL> return [ l for l in self . _labels if l is not None ] <EOL> @ property <EOL> def _fields ( self ) : <EOL> return tuple ( self . keys ( ) ) <EOL> def _asdict ( self ) : <EOL> return dict ( ( key , self . __dict__ [ key ] ) for key in self . keys ( ) ) <EOL> class ImmutableContainer ( object ) : <EOL> def _immutable ( self , * arg , ** kw ) : <EOL> raise TypeError ( \"\" % self . __class__ . __name__ ) <EOL> __delitem__ = __setitem__ = __setattr__ = _immutable <EOL> class immutabledict ( ImmutableContainer , dict ) : <EOL> clear = pop = popitem = setdefault = update = ImmutableContainer . _immutable <EOL> def __new__ ( cls , * args ) : <EOL> new = dict . __new__ ( cls ) <EOL> dict . __init__ ( new , * args ) <EOL> return new <EOL> def __init__ ( self , * args ) : <EOL> pass <EOL> def __reduce__ ( self ) : <EOL> return immutabledict , ( dict ( self ) , ) <EOL> def union ( self , d ) : <EOL> if not self : <EOL> return immutabledict ( d ) <EOL> else : <EOL> d2 = immutabledict ( self ) <EOL> dict . update ( d2 , d ) <EOL> return d2 <EOL> def __repr__ ( self ) : <EOL> return \"\" % dict . __repr__ ( self ) <EOL> class Properties ( object ) : <EOL> def __init__ ( self , data ) : <EOL> self . __dict__ [ '_data' ] = data <EOL> def __len__ ( self ) : <EOL> return len ( self . _data ) <EOL> def __iter__ ( self ) : <EOL> return iter ( list ( self . _data . values ( ) ) ) <EOL> def __add__ ( self , other ) : <EOL> return list ( self ) + list ( other ) <EOL> def __setitem__ ( self , key , object ) : <EOL> self . _data [ key ] = object <EOL> def __getitem__ ( self , key ) : <EOL> return self . _data [ key ] <EOL> def __delitem__ ( self , key ) : <EOL> del self . _data [ key ] <EOL> def __setattr__ ( self , key , object ) : <EOL> self . _data [ key ] = object <EOL> def __getstate__ ( self ) : <EOL> return { '_data' : self . __dict__ [ '_data' ] } <EOL> def __setstate__ ( self , state ) : <EOL> self . __dict__ [ '_data' ] = state [ '_data' ] <EOL> def __getattr__ ( self , key ) : <EOL> try : <EOL> return self . _data [ key ] <EOL> except KeyError : <EOL> raise AttributeError ( key ) <EOL> def __contains__ ( self , key ) : <EOL> return key in self . _data <EOL> def as_immutable ( self ) : <EOL> return ImmutableProperties ( self . _data ) <EOL> def update ( self , value ) : <EOL> self . _data . update ( value ) <EOL> def get ( self , key , default = None ) : <EOL> if key in self : <EOL> return self [ key ] <EOL> else : <EOL> return default <EOL> def keys ( self ) : <EOL> return list ( self . _data ) <EOL> def values ( self ) : <EOL> return list ( self . _data . values ( ) ) <EOL> def items ( self ) : <EOL> return list ( self . _data . items ( ) ) <EOL> def has_key ( self , key ) : <EOL> return key in self . _data <EOL> def clear ( self ) : <EOL> self . _data . clear ( ) <EOL> class OrderedProperties ( Properties ) : <EOL> def __init__ ( self ) : <EOL> Properties . 
__init__ ( self , OrderedDict ( ) ) <EOL> class ImmutableProperties ( ImmutableContainer , Properties ) : <EOL> class OrderedDict ( dict ) : <EOL> def __init__ ( self , ____sequence = None , ** kwargs ) : <EOL> self . _list = [ ] <EOL> if ____sequence is None : <EOL> if kwargs : <EOL> self . update ( ** kwargs ) <EOL> else : <EOL> self . update ( ____sequence , ** kwargs ) <EOL> def clear ( self ) : <EOL> self . _list = [ ] <EOL> dict . clear ( self ) <EOL> def copy ( self ) : <EOL> return self . __copy__ ( ) <EOL> def __copy__ ( self ) : <EOL> return OrderedDict ( self ) <EOL> def sort ( self , * arg , ** kw ) : <EOL> self . _list . sort ( * arg , ** kw ) <EOL> def update ( self , ____sequence = None , ** kwargs ) : <EOL> if ____sequence is not None : <EOL> if hasattr ( ____sequence , 'keys' ) : <EOL> for key in ____sequence . keys ( ) : <EOL> self . __setitem__ ( key , ____sequence [ key ] ) <EOL> else : <EOL> for key , value in ____sequence : <EOL> self [ key ] = value <EOL> if kwargs : <EOL> self . update ( kwargs ) <EOL> def setdefault ( self , key , value ) : <EOL> if key not in self : <EOL> self . __setitem__ ( key , value ) <EOL> return value <EOL> else : <EOL> return self . __getitem__ ( key ) <EOL> def __iter__ ( self ) : <EOL> return iter ( self . _list ) <EOL> def keys ( self ) : <EOL> return list ( self ) <EOL> def values ( self ) : <EOL> return [ self [ key ] for key in self . _list ] <EOL> def items ( self ) : <EOL> return [ ( key , self [ key ] ) for key in self . _list ] <EOL> if py2k : <EOL> def itervalues ( self ) : <EOL> return iter ( self . values ( ) ) <EOL> def iterkeys ( self ) : <EOL> return iter ( self ) <EOL> def iteritems ( self ) : <EOL> return iter ( self . items ( ) ) <EOL> def __setitem__ ( self , key , object ) : <EOL> if key not in self : <EOL> try : <EOL> self . _list . append ( key ) <EOL> except AttributeError : <EOL> self . _list = [ key ] <EOL> dict . __setitem__ ( self , key , object ) <EOL> def __delitem__ ( self , key ) : <EOL> dict . __delitem__ ( self , key ) <EOL> self . _list . remove ( key ) <EOL> def pop ( self , key , * default ) : <EOL> present = key in self <EOL> value = dict . pop ( self , key , * default ) <EOL> if present : <EOL> self . _list . remove ( key ) <EOL> return value <EOL> def popitem ( self ) : <EOL> item = dict . popitem ( self ) <EOL> self . _list . remove ( item [ 0 ] ) <EOL> return item <EOL> class OrderedSet ( set ) : <EOL> def __init__ ( self , d = None ) : <EOL> set . __init__ ( self ) <EOL> self . _list = [ ] <EOL> if d is not None : <EOL>"
}
```
### Data Fields
In the following, each data field is explained for each config. The data fields are the same among all splits.
#### java, python
|field name| type | description |
|----------|------|----------------------------|
|id |int32 | Index of the sample |
|input |string| Input code string |
|gt |string| Code string to be predicted|
### Data Splits
| name |train|
|------|----:|
|java | 3000|
|python|10000|
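A minimal sketch of loading one configuration (config names per the table above):

```python
from datasets import load_dataset

ds = load_dataset("code_x_glue_cc_code_completion_line", "python", split="train")
sample = ds[0]
# "input" is the unfinished context; "gt" is the line to be completed.
context, target = sample["input"], sample["gt"]
```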
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://github.com/microsoft, https://github.com/madlag
### Licensing Information
Computational Use of Data Agreement (C-UDA) License.
### Citation Information
```
@article{raychev2016probabilistic,
title={Probabilistic Model for Code with Decision Trees},
author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
journal={ACM SIGPLAN Notices},
pages={731--747},
year={2016},
publisher={ACM New York, NY, USA}
}
@inproceedings{allamanis2013mining,
title={Mining Source Code Repositories at Massive Scale using Language Modeling},
author={Allamanis, Miltiadis and Sutton, Charles},
booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
pages={207--216},
year={2013},
organization={IEEE}
}
```
### Contributions
Thanks to [@madlag](https://github.com/madlag) (and partly also [@ncoop57](https://github.com/ncoop57)) for adding this dataset.
setimes | 2022-11-03T16:47:00.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:bg",
"language:bs",
"language:el",
"language:en",
"language:hr",
"language:mk",
"language:ro",
"languag... | null | SETimes – A Parallel Corpus of English and South-East European Languages
The corpus is based on the content published on the SETimes.com news portal. The news portal publishes “news and views from Southeast Europe” in ten languages: Bulgarian, Bosnian, Greek, English, Croatian, Macedonian, Romanian, Albanian, Serbian and Turkish. This version of the corpus tries to solve the issues present in an older version of the corpus (published inside OPUS, described in the LREC 2010 paper by Francis M. Tyers and Murat Serdar Alperen). The following procedures were applied to resolve existing issues:
- stricter extraction process – no HTML residues present
- language identification on every non-English document – non-English online documents sometimes contain English material in cases where the article was not translated into that language
- resolving encoding issues in Croatian and Serbian – diacritics were partially lost due to encoding errors – text was rediacritized. | null | null | 0 | 11 | ---
pretty_name: SETimes – A Parallel Corpus of English and South-East European Languages
annotations_creators:
- found
language_creators:
- found
language:
- bg
- bs
- el
- en
- hr
- mk
- ro
- sq
- sr
- tr
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
dataset_info:
- config_name: bg-bs
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- bs
splits:
- name: train
num_bytes: 53816914
num_examples: 136009
download_size: 15406039
dataset_size: 53816914
- config_name: bg-el
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- el
splits:
- name: train
num_bytes: 115127431
num_examples: 212437
download_size: 28338218
dataset_size: 115127431
- config_name: bs-el
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- el
splits:
- name: train
num_bytes: 57102373
num_examples: 137602
download_size: 16418250
dataset_size: 57102373
- config_name: bg-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- en
splits:
- name: train
num_bytes: 84421414
num_examples: 213160
download_size: 23509552
dataset_size: 84421414
- config_name: bs-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- en
splits:
- name: train
num_bytes: 38167846
num_examples: 138387
download_size: 13477699
dataset_size: 38167846
- config_name: el-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: train
num_bytes: 95011154
num_examples: 227168
download_size: 26637317
dataset_size: 95011154
- config_name: bg-hr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- hr
splits:
- name: train
num_bytes: 81774321
num_examples: 203465
download_size: 23165617
dataset_size: 81774321
- config_name: bs-hr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- hr
splits:
- name: train
num_bytes: 38742816
num_examples: 138402
download_size: 13887348
dataset_size: 38742816
- config_name: el-hr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- hr
splits:
- name: train
num_bytes: 86642323
num_examples: 205008
download_size: 24662936
dataset_size: 86642323
- config_name: en-hr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- hr
splits:
- name: train
num_bytes: 57995502
num_examples: 205910
download_size: 20238640
dataset_size: 57995502
- config_name: bg-mk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- mk
splits:
- name: train
num_bytes: 110119623
num_examples: 207169
download_size: 26507432
dataset_size: 110119623
- config_name: bs-mk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- mk
splits:
- name: train
num_bytes: 53972847
num_examples: 132779
download_size: 15267045
dataset_size: 53972847
- config_name: el-mk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- mk
splits:
- name: train
num_bytes: 115285053
num_examples: 207262
download_size: 28103006
dataset_size: 115285053
- config_name: en-mk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- mk
splits:
- name: train
num_bytes: 84735835
num_examples: 207777
download_size: 23316519
dataset_size: 84735835
- config_name: hr-mk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hr
- mk
splits:
- name: train
num_bytes: 82230621
num_examples: 198876
download_size: 23008021
dataset_size: 82230621
- config_name: bg-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- ro
splits:
- name: train
num_bytes: 88058251
num_examples: 210842
download_size: 24592883
dataset_size: 88058251
- config_name: bs-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- ro
splits:
- name: train
num_bytes: 40894475
num_examples: 137365
download_size: 14272958
dataset_size: 40894475
- config_name: el-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- ro
splits:
- name: train
num_bytes: 93167572
num_examples: 212359
download_size: 26164582
dataset_size: 93167572
- config_name: en-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 63354811
num_examples: 213047
download_size: 21549096
dataset_size: 63354811
- config_name: hr-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hr
- ro
splits:
- name: train
num_bytes: 61696975
num_examples: 203777
download_size: 21276645
dataset_size: 61696975
- config_name: mk-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- mk
- ro
splits:
- name: train
num_bytes: 88449831
num_examples: 206168
download_size: 24409734
dataset_size: 88449831
- config_name: bg-sq
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- sq
splits:
- name: train
num_bytes: 87552911
num_examples: 211518
download_size: 24385772
dataset_size: 87552911
- config_name: bs-sq
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- sq
splits:
- name: train
num_bytes: 40407355
num_examples: 137953
download_size: 14097831
dataset_size: 40407355
- config_name: el-sq
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- sq
splits:
- name: train
num_bytes: 98779961
num_examples: 226577
download_size: 27676986
dataset_size: 98779961
- config_name: en-sq
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sq
splits:
- name: train
num_bytes: 66898163
num_examples: 227516
download_size: 22718906
dataset_size: 66898163
- config_name: hr-sq
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hr
- sq
splits:
- name: train
num_bytes: 61296829
num_examples: 205044
download_size: 21160637
dataset_size: 61296829
- config_name: mk-sq
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- mk
- sq
splits:
- name: train
num_bytes: 88053621
num_examples: 206601
download_size: 24241420
dataset_size: 88053621
- config_name: ro-sq
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ro
- sq
splits:
- name: train
num_bytes: 66845652
num_examples: 212320
download_size: 22515258
dataset_size: 66845652
- config_name: bg-sr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- sr
splits:
- name: train
num_bytes: 84698624
num_examples: 211172
download_size: 24007151
dataset_size: 84698624
- config_name: bs-sr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- sr
splits:
- name: train
num_bytes: 38418660
num_examples: 135945
download_size: 13804698
dataset_size: 38418660
- config_name: el-sr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- sr
splits:
- name: train
num_bytes: 95035416
num_examples: 224311
download_size: 27108001
dataset_size: 95035416
- config_name: en-sr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sr
splits:
- name: train
num_bytes: 63670296
num_examples: 225169
download_size: 22279147
dataset_size: 63670296
- config_name: hr-sr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hr
- sr
splits:
- name: train
num_bytes: 58560895
num_examples: 203989
download_size: 20791317
dataset_size: 58560895
- config_name: mk-sr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- mk
- sr
splits:
- name: train
num_bytes: 85333924
num_examples: 207295
download_size: 23878419
dataset_size: 85333924
- config_name: ro-sr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ro
- sr
splits:
- name: train
num_bytes: 63899703
num_examples: 210612
download_size: 22113558
dataset_size: 63899703
- config_name: sq-sr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- sq
- sr
splits:
- name: train
num_bytes: 67503584
num_examples: 224595
download_size: 23330640
dataset_size: 67503584
- config_name: bg-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- tr
splits:
- name: train
num_bytes: 86915746
num_examples: 206071
download_size: 23915651
dataset_size: 86915746
- config_name: bs-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- tr
splits:
- name: train
num_bytes: 40280655
num_examples: 133958
download_size: 13819443
dataset_size: 40280655
- config_name: el-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- tr
splits:
- name: train
num_bytes: 91637159
num_examples: 207029
download_size: 25396713
dataset_size: 91637159
- config_name: en-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- tr
splits:
- name: train
num_bytes: 62858968
num_examples: 207678
download_size: 21049989
dataset_size: 62858968
- config_name: hr-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hr
- tr
splits:
- name: train
num_bytes: 61188085
num_examples: 199260
download_size: 20809412
dataset_size: 61188085
- config_name: mk-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- mk
- tr
splits:
- name: train
num_bytes: 87536870
num_examples: 203231
download_size: 23781873
dataset_size: 87536870
- config_name: ro-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ro
- tr
splits:
- name: train
num_bytes: 66726535
num_examples: 206104
download_size: 22165394
dataset_size: 66726535
- config_name: sq-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- sq
- tr
splits:
- name: train
num_bytes: 66371734
num_examples: 207107
download_size: 22014678
dataset_size: 66371734
- config_name: sr-tr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- sr
- tr
splits:
- name: train
num_bytes: 63371906
num_examples: 205993
download_size: 21602038
dataset_size: 63371906
---
# Dataset Card for SETimes – A Parallel Corpus of English and South-East European Languages
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/setimes/
- **Repository:** None
- **Paper:** None
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each instance consists of an `id` and a `translation` dictionary keyed by the two language codes of the chosen configuration, e.g. `{'id': '0', 'translation': {'en': '...', 'sq': '...'}}` for the `en-sq` pair (sentence texts elided here).
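A minimal loading sketch, assuming the Hub id `setimes` and using the `en-sq` configuration from the metadata above:

```python
from datasets import load_dataset

# English–Albanian sentence pairs from the SETimes news corpus
setimes = load_dataset("setimes", "en-sq", split="train")
pair = setimes[0]["translation"]
print(pair["en"], "|", pair["sq"])
```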
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
sogou_news | 2023-04-05T13:40:25.000Z | [
"arxiv:1509.01626",
"region:us"
] | null | The Sogou News dataset is a mixture of 2,909,551 news articles from the SogouCA and SogouCS news corpora, in 5 categories.
The number of training samples selected for each class is 90,000 and testing 12,000. Note that the Chinese characters have been converted to Pinyin.
The classification labels of the news are determined by their domain names in the URL. For example, news with
the URL http://sports.sohu.com is categorized as a sports class. | @misc{zhang2015characterlevel,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Zhao and Yann LeCun},
year={2015},
eprint={1509.01626},
archivePrefix={arXiv},
primaryClass={cs.LG}
} | null | 0 | 11 | ---
pretty_name: Sogou News
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
- name: label
dtype:
class_label:
names:
'0': sports
'1': finance
'2': entertainment
'3': automobile
'4': technology
splits:
- name: test
num_bytes: 168645860
num_examples: 60000
- name: train
num_bytes: 1257931136
num_examples: 450000
download_size: 384269937
dataset_size: 1426576996
---
# Dataset Card for "sogou_news"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 384.27 MB
- **Size of the generated dataset:** 1.43 GB
- **Total amount of disk used:** 1.81 GB
### Dataset Summary
The Sogou News dataset is a mixture of 2,909,551 news articles from the SogouCA and SogouCS news corpora, in 5 categories.
The number of training samples selected for each class is 90,000 and testing 12,000. Note that the Chinese characters have been converted to Pinyin.
The classification labels of the news are determined by their domain names in the URL. For example, news with
the URL http://sports.sohu.com is categorized as a sports class.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 384.27 MB
- **Size of the generated dataset:** 1.43 GB
- **Total amount of disk used:** 1.81 GB
An example of 'train' looks as follows.
```
{
"content": "du2 jia1 ti2 go1ng me3i ri4 ba4o jia4 \\n re4 xia4n :010-64438227\\n che1 xi2ng ba4o jia4 - cha2 xu2n jie2 guo3 \\n pi3n pa2i xi2ng ha4o jia4 ge2 ji1ng xia1o sha1ng ri4 qi1 zha1 ka4n ca1n shu4 pi2ng lu4n ",
"label": 3,
"title": " da3o ha2ng "
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `title`: a `string` feature.
- `content`: a `string` feature.
- `label`: a classification label, with possible values including `sports` (0), `finance` (1), `entertainment` (2), `automobile` (3), `technology` (4).
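Since `label` is a class label, the integer values can be mapped back to their names via the standard `datasets` features API; a small sketch:

```python
from datasets import load_dataset

sogou = load_dataset("sogou_news", split="test")
example = sogou[0]
# int2str converts the class index into one of:
# sports, finance, entertainment, automobile, technology
label_name = sogou.features["label"].int2str(example["label"])
print(example["title"], "->", label_name)
```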
### Data Splits
| name |train |test |
|-------|-----:|----:|
|default|450000|60000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@misc{zhang2015characterlevel,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Zhao and Yann LeCun},
year={2015},
eprint={1509.01626},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
spanish_billion_words | 2022-11-03T16:16:07.000Z | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"sour... | null | An unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
These resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project Corpora and the Europarl,
the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.
This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus. | @misc{cardellinoSBWCE,
author = {Cardellino, Cristian},
title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},
url = {https://crscardellino.github.io/SBWCE/},
month = {August},
year = {2019}
} | null | 8 | 11 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: sbwce
pretty_name: Spanish Billion Word Corpus and Embeddings
dataset_info:
features:
- name: text
dtype: string
config_name: corpus
splits:
- name: train
num_bytes: 8950895954
num_examples: 46925295
download_size: 2024166993
dataset_size: 8950895954
---
# Dataset Card for Spanish Billion Words
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Spanish Billion Words homepage](https://crscardellino.github.io/SBWCE/)
- **Point of Contact:** [Cristian Cardellino](mailto:ccardellino@unc.edu.ar) (Corpus Creator), [María Grandury](mailto:mariagrandury@gmail.com) (Corpus Submitter)
### Dataset Summary
The Spanish Billion Words Corpus is an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
These resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project Corpora and the Europarl,
the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.
This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus.
### Supported Tasks and Leaderboards
This dataset can be used for language modelling and for pretraining language models.
### Languages
The text in this dataset is in Spanish, BCP-47 code: 'es'.
## Dataset Structure
### Data Instances
Each example in this dataset is a sentence in Spanish:
```
{'text': 'Yo me coloqué en un asiento próximo a una ventana cogí un libro de una mesa y empecé a leer'}
```
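A minimal loading sketch, using the `corpus` configuration named in the metadata above; streaming is assumed here only to avoid materializing the ~9 GB corpus on disk:

```python
from datasets import load_dataset

# Stream the corpus instead of downloading everything up front
sbwc = load_dataset("spanish_billion_words", "corpus", split="train", streaming=True)
for example in sbwc.take(3):
    print(example["text"])
```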
### Data Fields
- `text`: a sentence in Spanish
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
The Spanish Billion Words Corpus was created to train word embeddings using the word2vec algorithm provided by the gensim package.
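As an illustration of that use case, here is a minimal gensim word2vec sketch; the hyperparameters are illustrative, not the ones used for the released embeddings:

```python
from datasets import load_dataset
from gensim.models import Word2Vec

corpus = load_dataset("spanish_billion_words", "corpus", split="train")

class Sentences:
    """Restartable iterable of token lists; gensim iterates over the corpus several times."""
    def __iter__(self):
        for example in corpus:
            # Whitespace tokenization matches the corpus preprocessing described below
            yield example["text"].split()

model = Word2Vec(sentences=Sentences(), vector_size=300, window=5, min_count=5, workers=4)
print(model.wv.most_similar("rey"))  # nearest neighbours of "rey" ("king")
```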
### Source Data
#### Initial Data Collection and Normalization
The corpus was created compiling the following resources:
- The Spanish portion of SenSem.
- The Spanish portion of the [Ancora Corpus](http://clic.ub.edu/corpus/en).
- [Tibidabo Treebank and IULA Spanish LSP Treebank](http://lod.iula.upf.edu/resources/metadata_TRL_Tibidabo_LSP_treebank_ES).
- The Spanish portion of the following [OPUS Project](http://opus.nlpl.eu/index.php) Corpora:
- The [books](http://opus.nlpl.eu/Books.php) aligned by [Andras Farkas](https://farkastranslations.com/).
- The [JRC-Acquis](http://opus.nlpl.eu/JRC-Acquis.php) collection of legislative text of the European Union.
- The [News Commentary](http://opus.nlpl.eu/News-Commentary.php) corpus.
- The [United Nations](http://opus.nlpl.eu/UN.php) documents compiled by [Alexandre Rafalovitch](https://www.outerthoughts.com/) and [Robert Dale](http://web.science.mq.edu.au/~rdale/).
- The Spanish portion of the [Europarl](http://statmt.org/europarl/) (European Parliament), compiled by [Philipp Koehn](https://homepages.inf.ed.ac.uk/pkoehn/).
- Dumps from the Spanish [Wikipedia](https://es.wikipedia.org/wiki/Wikipedia:Portada), [Wikisource](https://es.wikisource.org/wiki/Portada) and [Wikibooks](https://es.wikibooks.org/wiki/Portada) on date 2015-09-01, parsed with the Wikipedia Extractor.
All the annotated corpora (like Ancora, SenSem and Tibidabo) were untagged and
the parallel corpora (most coming from the OPUS Project) were preprocessed to obtain only their Spanish portions.
Once the whole corpus was unannotated, all non-alphanumeric characters were replaced with whitespaces,
all numbers with the token “DIGITO” and all the multiple whitespaces with only one whitespace.
The capitalization of the words remained unchanged.
#### Who are the source language producers?
The data was compiled and processed by Cristian Cardellino.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The data was collected and processed by Cristian Cardellino.
### Licensing Information
The dataset is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license
[(CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{cardellinoSBWCE,
author = {Cardellino, Cristian},
title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},
url = {https://crscardellino.github.io/SBWCE/},
month = {August},
year = {2019}
}
```
### Contributions
Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset. |
rcds/swiss_judgment_prediction | 2023-06-14T11:59:24.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:de",
"language:fr",
"language:it",
"language:en",
"license:cc-by-sa-4.0",
"judgement-prediction",
... | rcds | Swiss-Judgment-Prediction is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), posing a challenging text classification task. We also provide additional metadata, i.e., the publication year, the legal area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP. | @InProceedings{niklaus-etal-2021-swiss,
author = {Niklaus, Joel
and Chalkidis, Ilias
and Stürmer, Matthias},
title = {Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark},
booktitle = {Proceedings of the 2021 Natural Legal Language Processing Workshop},
year = {2021},
location = {Punta Cana, Dominican Republic},
}
@misc{niklaus2022empirical,
title={An Empirical Study on Cross-X Transfer for Legal Judgment Prediction},
author={Joel Niklaus and Matthias Stürmer and Ilias Chalkidis},
year={2022},
eprint={2209.12325},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 11 | 11 | ---
pretty_name: Swiss-Judgment-Prediction
annotations_creators:
- found
language_creators:
- found
language:
- de
- fr
- it
- en
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
tags:
- judgement-prediction
dataset_info:
- config_name: de
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 104270719
num_examples: 35458
- name: validation
num_bytes: 12131878
num_examples: 4705
- name: test
num_bytes: 26056177
num_examples: 9725
download_size: 1000382331
dataset_size: 142458774
- config_name: fr
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 96807957
num_examples: 21179
- name: validation
num_bytes: 13031904
num_examples: 3095
- name: test
num_bytes: 33318359
num_examples: 6820
download_size: 1000382331
dataset_size: 143158220
- config_name: it
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 10773516
num_examples: 3072
- name: validation
num_bytes: 1045551
num_examples: 408
- name: test
num_bytes: 2474761
num_examples: 812
download_size: 1000382331
dataset_size: 14293828
- config_name: mt_de
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 106990696
num_examples: 24251
- name: validation
- name: test
download_size: 1000382331
dataset_size: 106990696
- config_name: mt_fr
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 117932134
num_examples: 38524
- name: validation
- name: test
download_size: 1000382331
dataset_size: 117932134
- config_name: mt_it
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 201749076
num_examples: 56631
- name: validation
- name: test
download_size: 1000382331
dataset_size: 201749076
- config_name: mt_en
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 196352783
num_examples: 59703
- name: validation
- name: test
download_size: 1000382331
dataset_size: 196352783
- config_name: all
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 211852192
num_examples: 59709
- name: validation
num_bytes: 26209333
num_examples: 8208
- name: test
num_bytes: 61849297
num_examples: 17357
download_size: 1000382331
dataset_size: 299910822
- config_name: all+mt
features:
- name: id
dtype: int32
- name: year
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dismissal
'1': approval
- name: language
dtype: string
- name: region
dtype: string
- name: canton
dtype: string
- name: legal area
dtype: string
- name: source_language
dtype: string
splits:
- name: train
num_bytes: 834876881
num_examples: 238818
- name: validation
num_bytes: 26209333
num_examples: 8208
- name: test
num_bytes: 61849297
num_examples: 17357
download_size: 1000382331
dataset_size: 922935511
---
# Dataset Card for "SwissJudgmentPrediction"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/JoelNiklaus/SwissCourtRulingCorpus
- **Repository:** https://github.com/JoelNiklaus/SwissCourtRulingCorpus
- **Paper:** https://arxiv.org/abs/2110.00806
- **Leaderboard:** N/A
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus@inf.unibe.ch)
### Dataset Summary
**Documents**
Swiss-Judgment-Prediction is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), posing a challenging text classification task. We also provide additional metadata, i.e., the publication year, the legal area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP.
### Supported Tasks and Leaderboards
SwissJudgmentPrediction can be used for the legal judgment prediction task.
The dataset is not yet part of an established benchmark.
### Languages
Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme Court decisions. The decisions are written by the judges and clerks in the language of the proceedings.
## Dataset Structure
In version 2, machine-translated data was added as an additional training set: all documents were translated into German, French, Italian and English using [EasyNMT](https://github.com/UKPLab/EasyNMT).
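A minimal sketch for pulling in that machine-translated material via the `all+mt` configuration from the metadata above (validation and test remain the original, non-translated splits):

```python
from datasets import load_dataset

# 'all+mt' = original training data in all three languages plus the machine translations
dataset = load_dataset('swiss_judgment_prediction', 'all+mt')
print(dataset['train'].num_rows)  # 238818 according to the dataset metadata
```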
### Data Instances
**Multilingual use of the dataset**
When the dataset is used in a multilingual setting, select the 'all' configuration:
```python
from datasets import load_dataset
dataset = load_dataset('swiss_judgment_prediction', 'all')
```
```
{
"id": 48757,
"year": 2015,
"facts": "Sachverhalt: A. X._ war bei der Krankenversicherung C._ taggeldversichert. Infolge einer Arbeitsunf\u00e4higkeit leistete ihm die C._ vom 30. Juni 2011 bis am 28. Juni 2013 Krankentaggelder, wobei die Leistungen bis am 30. September 2012 auf Grundlage einer Arbeitsunf\u00e4higkeit von 100% und danach basierend auf einer Arbeitsunf\u00e4higkeit von 55% erbracht wurden. Die Neueinsch\u00e4tzung der Arbeitsf\u00e4higkeit erfolgte anhand eines Gutachtens der D._ AG vom 27. August 2012, welches im Auftrag der C._ erstellt wurde. X._ machte daraufhin gegen\u00fcber der C._ geltend, er sei entgegen dem Gutachten auch nach dem 30. September 2012 zu 100% arbeitsunf\u00e4hig gewesen. Ferner verlangte er von der D._ AG zwecks externer \u00dcberpr\u00fcfung des Gutachtens die Herausgabe s\u00e4mtlicher diesbez\u00fcglicher Notizen, Auswertungen und Unterlagen. A._ (als Gesch\u00e4ftsf\u00fchrer der D._ AG) und B._ (als f\u00fcr das Gutachten medizinisch Verantwortliche) antworteten ihm, dass sie alle Unterlagen der C._ zugestellt h\u00e4tten und dass allf\u00e4llige Fragen zum Gutachten direkt der C._ zu stellen seien. X._ reichte am 2. Januar 2014 eine Strafanzeige gegen A._ und B._ ein. Er wirft diesen vor, ihn durch die Nichtherausgabe der Dokumente und durch Behinderung des IV-Verfahrens gen\u00f6tigt, Daten besch\u00e4digt bzw. vernichtet und ein falsches \u00e4rztliches Zeugnis ausgestellt zu haben. Zudem h\u00e4tten sie durch die Verz\u00f6gerung des IV-Verfahrens und insbesondere durch das falsche \u00e4rztliche Zeugnis sein Verm\u00f6gen arglistig gesch\u00e4digt. B. Die Staatsanwaltschaft des Kantons Bern, Region Oberland, nahm das Verfahren wegen N\u00f6tigung, Datenbesch\u00e4digung, falschem \u00e4rztlichem Zeugnis und arglistiger Verm\u00f6genssch\u00e4digung mit Verf\u00fcgung vom 10. November 2014 nicht an die Hand. Das Obergericht des Kantons Bern wies die von X._ dagegen erhobene Beschwerde am 27. April 2015 ab, soweit darauf einzutreten war. C. X._ beantragt mit Beschwerde in Strafsachen, der Beschluss vom 27. April 2015 sei aufzuheben und die Angelegenheit zur korrekten Ermittlung des Sachverhalts an die Staatsanwaltschaft zur\u00fcckzuweisen. Er stellt zudem den sinngem\u00e4ssen Antrag, das bundesgerichtliche Verfahren sei w\u00e4hrend der Dauer des konnexen Strafverfahrens gegen eine Teilgutachterin und des ebenfalls konnexen Zivil- oder Strafverfahrens gegen die C._ wegen Einsichtsverweigerung in das mutmasslich gef\u00e4lschte Originalgutachten zu sistieren. X._ ersucht um unentgeltliche Rechtspflege. ",
"labels": 0, # dismissal
"language": "de",
"region": "Espace Mittelland",
"canton": "be",
"legal area": "penal law"
}
```
**Monolingual use of the dataset**
When the dataset is used in a monolingual setting, select the ISO language code of one of the 3 supported languages. For example:
```python
from datasets import load_dataset
dataset = load_dataset('swiss_judgment_prediction', 'de')
```
```
{
"id": 48757,
"year": 2015,
"facts": "Sachverhalt: A. X._ war bei der Krankenversicherung C._ taggeldversichert. Infolge einer Arbeitsunf\u00e4higkeit leistete ihm die C._ vom 30. Juni 2011 bis am 28. Juni 2013 Krankentaggelder, wobei die Leistungen bis am 30. September 2012 auf Grundlage einer Arbeitsunf\u00e4higkeit von 100% und danach basierend auf einer Arbeitsunf\u00e4higkeit von 55% erbracht wurden. Die Neueinsch\u00e4tzung der Arbeitsf\u00e4higkeit erfolgte anhand eines Gutachtens der D._ AG vom 27. August 2012, welches im Auftrag der C._ erstellt wurde. X._ machte daraufhin gegen\u00fcber der C._ geltend, er sei entgegen dem Gutachten auch nach dem 30. September 2012 zu 100% arbeitsunf\u00e4hig gewesen. Ferner verlangte er von der D._ AG zwecks externer \u00dcberpr\u00fcfung des Gutachtens die Herausgabe s\u00e4mtlicher diesbez\u00fcglicher Notizen, Auswertungen und Unterlagen. A._ (als Gesch\u00e4ftsf\u00fchrer der D._ AG) und B._ (als f\u00fcr das Gutachten medizinisch Verantwortliche) antworteten ihm, dass sie alle Unterlagen der C._ zugestellt h\u00e4tten und dass allf\u00e4llige Fragen zum Gutachten direkt der C._ zu stellen seien. X._ reichte am 2. Januar 2014 eine Strafanzeige gegen A._ und B._ ein. Er wirft diesen vor, ihn durch die Nichtherausgabe der Dokumente und durch Behinderung des IV-Verfahrens gen\u00f6tigt, Daten besch\u00e4digt bzw. vernichtet und ein falsches \u00e4rztliches Zeugnis ausgestellt zu haben. Zudem h\u00e4tten sie durch die Verz\u00f6gerung des IV-Verfahrens und insbesondere durch das falsche \u00e4rztliche Zeugnis sein Verm\u00f6gen arglistig gesch\u00e4digt. B. Die Staatsanwaltschaft des Kantons Bern, Region Oberland, nahm das Verfahren wegen N\u00f6tigung, Datenbesch\u00e4digung, falschem \u00e4rztlichem Zeugnis und arglistiger Verm\u00f6genssch\u00e4digung mit Verf\u00fcgung vom 10. November 2014 nicht an die Hand. Das Obergericht des Kantons Bern wies die von X._ dagegen erhobene Beschwerde am 27. April 2015 ab, soweit darauf einzutreten war. C. X._ beantragt mit Beschwerde in Strafsachen, der Beschluss vom 27. April 2015 sei aufzuheben und die Angelegenheit zur korrekten Ermittlung des Sachverhalts an die Staatsanwaltschaft zur\u00fcckzuweisen. Er stellt zudem den sinngem\u00e4ssen Antrag, das bundesgerichtliche Verfahren sei w\u00e4hrend der Dauer des konnexen Strafverfahrens gegen eine Teilgutachterin und des ebenfalls konnexen Zivil- oder Strafverfahrens gegen die C._ wegen Einsichtsverweigerung in das mutmasslich gef\u00e4lschte Originalgutachten zu sistieren. X._ ersucht um unentgeltliche Rechtspflege. ",
"labels": 0, # dismissal
"language": "de",
"region": "Espace Mittelland",
"canton": "be",
"legal area": "penal law"
}
```
### Data Fields
**Multilingual use of the dataset**
The following data fields are provided for documents (`train`, `validation`, `test`):
`id`: (**int**) a unique identifier for the document \
`year`: (**int**) the publication year \
`text`: (**str**) the facts of the case \
`label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval) \
`language`: (**str**) one of (de, fr, it) \
`region`: (**str**) the region of the lower court \
`canton`: (**str**) the canton of the lower court \
`legal area`: (**str**) the legal area of the case
**Monolingual use of the dataset**
The following data fields are provided for documents (`train`, `validation`, `test`):
`id`: (**int**) a unique identifier for the document \
`year`: (**int**) the publication year \
`text`: (**str**) the facts of the case \
`label`: (**class label**) the judgment outcome: 0 (dismissal) or 1 (approval) \
`language`: (**str**) one of (de, fr, it) \
`region`: (**str**) the region of the lower court \
`canton`: (**str**) the canton of the lower court \
`legal area`: (**str**) the legal area of the case
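Because `label` is a class label, the integers can be mapped back to their names with the features API; a small sketch:

```python
from datasets import load_dataset

dataset = load_dataset('swiss_judgment_prediction', 'de')
label_feature = dataset['train'].features['label']
print(label_feature.int2str(0))  # 'dismissal'
print(label_feature.int2str(1))  # 'approval'
```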
### Data Splits
| Language | Subset | Number of Documents (Training/Validation/Test) |
|------------|------------|------------------------------------------------|
| German     | **de**     | 35'458 / 4'705 / 9'725                         |
| French | **fr** | 21'179 / 3'095 / 6'820 |
| Italian | **it** | 3'072 / 408 / 812 |
| All | **all** | 59'709 / 8'208 / 17'357 |
| MT German | **mt_de** | 24'251 / 0 / 0 |
| MT French | **mt_fr** | 38'524 / 0 / 0 |
| MT Italian | **mt_it** | 56'631 / 0 / 0 |
| MT All | **all+mt** | 238'818 / 8'208 / 17'357 |
## Dataset Creation
### Curation Rationale
The dataset was curated by Niklaus et al. (2021).
### Source Data
#### Initial Data Collection and Normalization
The original data are available from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed HTML. The documents were downloaded in this format from the Entscheidsuche portal (https://entscheidsuche.ch).
#### Who are the source language producers?
Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme Court decisions. The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
The decisions have been annotated with the binarized judgment outcome using parsers and regular expressions.
#### Who are the annotators?
Joel Niklaus and Adrian Jörg annotated the binarized judgment outcomes.
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Niklaus et al. (2021)
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2000-2020
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
*Joel Niklaus, Ilias Chalkidis, and Matthias Stürmer.*
*Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark*
*Proceedings of the 2021 Natural Legal Language Processing Workshop. Punta Cana, Dominican Republic. 2021*
```
@InProceedings{niklaus-etal-2021-swiss,
author = {Niklaus, Joel
and Chalkidis, Ilias
and Stürmer, Matthias},
title = {Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark},
booktitle = {Proceedings of the 2021 Natural Legal Language Processing Workshop},
year = {2021},
location = {Punta Cana, Dominican Republic},
}
```
and the new citation
```
@misc{niklaus2022empirical,
title={An Empirical Study on Cross-X Transfer for Legal Judgment Prediction},
author={Joel Niklaus and Matthias Stürmer and Ilias Chalkidis},
year={2022},
eprint={2209.12325},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@joelniklaus](https://github.com/joelniklaus) for adding this dataset. |
thai_toxicity_tweet | 2023-01-25T14:45:38.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:th",
"license:cc-by-nc-3.0",
"region:us"
] | null | Thai Toxicity Tweet Corpus contains 3,300 tweets annotated by humans with guidelines including a 44-word dictionary.
The author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus
analysis indicates that tweets that include toxic words are not always toxic. Further, it is more likely that a tweet is toxic, if it contains
toxic words indicating their original meaning. Moreover, disagreements in annotation are primarily because of sarcasm, unclear existing
target, and word sense ambiguity.
Notes from data cleaner: The data was included in [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020.
By that time, 506 of the tweets were no longer publicly available. We denote these by `TWEET_NOT_FOUND` in `tweet_text`.
Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1). | @article{sirihattasak2019annotation,
title={Annotation and Classification of Toxicity for Thai Twitter},
author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi},
year={2019}
} | null | 2 | 11 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- th
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ThaiToxicityTweet
dataset_info:
features:
- name: tweet_id
dtype: string
- name: tweet_text
dtype: string
- name: toxic_votes
dtype: int32
- name: nontoxic_votes
dtype: int32
- name: is_toxic
dtype:
class_label:
names:
'0': neg
'1': pos
config_name: thai_toxicity_tweet
splits:
- name: train
num_bytes: 637387
num_examples: 3300
download_size: 194740
dataset_size: 637387
---
# Dataset Card for `thai_toxicity_tweet`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/
- **Repository:** https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/
- **Paper:** https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf
- **Leaderboard:**
- **Point of Contact:** https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf
### Dataset Summary
Thai Toxicity Tweet Corpus contains 3,300 tweets (506 tweets with texts missing) annotated by humans with guidelines including a 44-word dictionary.
The author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus
analysis indicates that tweets that include toxic words are not always toxic. Further, it is more likely that a tweet is toxic, if it contains
toxic words indicating their original meaning. Moreover, disagreements in annotation are primarily because of sarcasm, unclear existing
target, and word sense ambiguity.
Notes from data cleaner: The data was included in [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020. By that time, 506 of the tweets were no longer publicly available. We denote these by `TWEET_NOT_FOUND` in `tweet_text`.
Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1).
### Supported Tasks and Leaderboards
text classification
### Languages
Thai (`th`)
## Dataset Structure
### Data Instances
```
{'is_toxic': 0, 'nontoxic_votes': 3, 'toxic_votes': 0, 'tweet_id': '898576382384418817', 'tweet_text': 'วันๆ นี่คุยกะหมา แมว หมู ไก่ ม้า ควาย มากกว่าคุยกับคนไปละ'}
{'is_toxic': 1, 'nontoxic_votes': 0, 'toxic_votes': 3, 'tweet_id': '898573084981985280', 'tweet_text': 'ควายแดงเมิงด่ารัฐบาลจนรองนายกป่วย พวกมึงกำลังทำลายชาติรู้มั้ย มั้ย มั้ย มั้ยยยยยยยยย news.voicetv.co.th/thailand/51672…'}
```
### Data Fields
"tweet_id": Id of tweet on Twitter
"tweet_text": text of the tweet
"toxic_votes": how many annotators say it is toxic, out of 3 annotators
"nontoxic_votes": how many annotators say it is NOT toxic, out of 3 annotators
"is_toxic": 1 if tweet is toxic else 0 (majority rules)
### Data Splits
No explicit split is given.
## Dataset Creation
### Curation Rationale
The dataset is created as part of [Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf).
### Source Data
#### Initial Data Collection and Normalization
The authors used the public Twitter Search API to collect 9,819 tweets from January–December 2017 based on their keyword dictionary. Then, they selected 75 tweets for each keyword. In total, they collected 3,300 tweets for annotation. To ensure the quality of the data, they set the following selection criteria.
1. All tweets are selected by humans to prevent word ambiguity. (The Twitter API selected the tweets based on characters in the keyword. For example, in the case of "บ้า" (crazy), the API would also select "บ้านนอก" (countryside), which is not the target.)
2. The length of the tweet should be sufficiently long to discern the context of the tweet. Hence, they set five words as the minimum limit.
3. Tweets that contain only extremely toxic words (for example: "damn, retard, bitch, f*ck, slut!!!") are not considered.
4. In addition, they allowed tweets with English words if these were not critical elements in the labeling decision, for example, the word "f*ck." As a result, the corpus contains English words, but they make up less than 2% of the total.
All hashtags, re-tweets, and links were removed from these tweets. However, they did not delete emoticons because these emotional icons can imply the real intent of the post owners. Furthermore, only in the case of annotation, some entries such as the names of famous people were replaced with the tag <ไม่ขอเปิดเผยชื่อ> ("name not disclosed"), for anonymity and to prevent individual bias.
#### Who are the source language producers?
Twitter users in Thailand
### Annotations
#### Annotation process
We manually annotated our dataset with two labels: Toxic and Non-Toxic. We define a message as toxic if it indicates any harmful, damage, or negative intent based on our definition of toxicity. Furthermore, all the tweets were annotated by three annotators to identify toxicity; the conditions used for this identification are presented in the following list.
- A toxic message is a message that should be deleted or not be allowed in public.
- A message’s target or consequence must exist. It can either be an individual or a generalized group based on a commonality such as religion or ethnicity, or an entire community.
- Self-complain is not considered toxic, because it is not harmful to anyone. However, if self-complain is intended to indicate something bad, it will be considered as toxic.
- Both direct and indirect messages including those with sarcasm are taken into consideration.
We strictly instructed all the annotators about these concepts and asked them to perform a small test to ensure they understood these conditions. The annotation process was divided into two rounds. We asked the candidates to annotate in the first round to learn our annotation standard. Then, we asked them to annotate a different dataset and selected those who obtained a full score in the second round as annotators. Among the candidates, 20% failed the first round and were not involved in the final annotation.
#### Who are the annotators?
Three annotators hired by [Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf)
### Personal and Sensitive Information
Despite all tweets being public, due to the nature of toxic tweets, there might be personal attacks and toxic language used.
## Considerations for Using the Data
### Social Impact of Dataset
- toxic social media message classification dataset
### Discussion of Biases
- Users are masked before annotation by the annotators to prevent biases based on tweet authors
### Other Known Limitations
- The data was included in [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020. By that time, 506 of the tweets were no longer publicly available. We denote these by `TWEET_NOT_FOUND` in `tweet_text`.
## Additional Information
### Dataset Curators
[Sirihattasak et al (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf)
### Licensing Information
CC-BY-NC 3.0
### Citation Information
Please cite the following if you make use of the dataset:
```
@article{sirihattasak2019annotation,
title={Annotation and Classification of Toxicity for Thai Twitter},
author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi},
year={2019}
}
```
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. |
vctk | 2022-11-03T16:16:04.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | null | The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. | @inproceedings{Veaux2017CSTRVC,
title = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
author = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},
year = 2017
} | null | 6 | 11 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: VCTK
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: vctk
train-eval-index:
- config: main
task: automatic-speech-recognition
task_id: speech_recognition
splits:
train_split: train
col_mapping:
file: path
text: text
metrics:
- type: wer
name: WER
- type: cer
name: CER
dataset_info:
features:
- name: speaker_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: file
dtype: string
- name: text
dtype: string
- name: text_id
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: region
dtype: string
- name: comment
dtype: string
config_name: main
splits:
- name: train
num_bytes: 40103111
num_examples: 88156
download_size: 11747302977
dataset_size: 40103111
---
# Dataset Card for VCTK
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Edinburg DataShare](https://doi.org/10.7488/ds/2645)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data point comprises the path to the audio file, called `file`, and its transcription, called `text`.
```
{
'speaker_id': 'p225',
'text_id': '001',
'text': 'Please call Stella.',
'age': '23',
'gender': 'F',
'accent': 'English',
'region': 'Southern England',
'file': '/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac',
'audio':
{
'path': '/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac',
'array': array([0.00485229, 0.00689697, 0.00619507, ..., 0.00811768, 0.00836182, 0.00854492], dtype=float32),
'sampling_rate': 48000
},
'comment': ''
}
```
Each audio file is a single-channel FLAC with a sample rate of 48000 Hz.
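A minimal sketch for accessing the decoded audio, using the `main` configuration from the metadata above (note the ~11 GB download):

```python
from datasets import load_dataset

vctk = load_dataset("vctk", "main", split="train")
sample = vctk[0]
audio = sample["audio"]  # decoded on access: dict with 'path', 'array', 'sampling_rate'
print(sample["speaker_id"], sample["text"])
print(audio["sampling_rate"], audio["array"].shape)  # 48000, (num_samples,)
```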
### Data Fields
Each row consists of the following fields:
- `speaker_id`: Speaker ID
- `audio`: Audio recording
- `file`: Path to audio file
- `text`: Text transcription of corresponding audio
- `text_id`: Text ID
- `age`: Speaker's age
- `gender`: Speaker's gender
- `accent`: Speaker's accent
- `region`: Speaker's region, if annotation exists
- `comment`: Miscellaneous comments, if any
### Data Splits
The dataset has no predefined splits.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
### Citation Information
```bibtex
@inproceedings{Veaux2017CSTRVC,
title = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
author = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},
year = 2017
}
```
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. |
GEM/cochrane-simplification | 2022-10-24T15:30:10.000Z | [
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | GEM | This dataset measures the ability of a model to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon. | @inproceedings{devaraj-etal-2021-paragraph,
title = "Paragraph-level Simplification of Medical Texts",
author = "Devaraj, Ashwin and
Marshall, Iain and
Wallace, Byron and
Li, Junyi Jessy",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.395",
doi = "10.18653/v1/2021.naacl-main.395",
pages = "4972--4984",
} | null | 3 | 11 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- cc-by-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- text-simplification
pretty_name: cochrane-simplification
---
# Dataset Card for GEM/cochrane-simplification
## Dataset Description
- **Homepage:** https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts
- **Repository:** https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts
- **Paper:** https://aclanthology.org/2021.naacl-main.395/
- **Leaderboard:** N/A
- **Point of Contact:** Ashwin Devaraj
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/cochrane-simplification).
### Dataset Summary
Cochrane is an English dataset for paragraph-level simplification of medical texts. Cochrane is a database of systematic reviews of clinical questions, many of which have summaries in plain English targeting readers without a university education. The dataset comprises about 4,500 such pairs.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/cochrane-simplification')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/cochrane-simplification).
#### website
[Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts)
#### paper
[Link](https://aclanthology.org/2021.naacl-main.395/)
#### authors
Ashwin Devaraj (The University of Texas at Austin), Iain J. Marshall (King's College London), Byron C. Wallace (Northeastern University), Junyi Jessy Li (The University of Texas at Austin)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[Link](https://aclanthology.org/2021.naacl-main.395/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{devaraj-etal-2021-paragraph,
title = "Paragraph-level Simplification of Medical Texts",
author = "Devaraj, Ashwin and
Marshall, Iain and
Wallace, Byron and
Li, Junyi Jessy",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.395",
doi = "10.18653/v1/2021.naacl-main.395",
pages = "4972--4984",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Ashwin Devaraj
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
ashwin.devaraj@utexas.edu
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-4.0: Creative Commons Attribution 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The intended use of this dataset is to train models that simplify medical text at the paragraph level so that it may be more accessible to the lay reader.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Simplification
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
A model trained on this dataset can be used to simplify medical texts to make them more accessible to readers without medical expertise.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
The University of Texas at Austin, King's College London, Northeastern University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Ashwin Devaraj (The University of Texas at Austin), Iain J. Marshall (King's College London), Byron C. Wallace (Northeastern University), Junyi Jessy Li (The University of Texas at Austin)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
National Institutes of Health (NIH) grant R01-LM012086, National Science Foundation (NSF) grant IIS-1850153, Texas Advanced Computing Center (TACC) computational resources
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Ashwin Devaraj (The University of Texas at Austin)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id`: string, a unique identifier for the example
- `doi`: string, DOI identifier for the Cochrane review from which the example was generated
- `source`: string, an excerpt from an abstract of a Cochrane review
- `target`: string, an excerpt from the plain-language summary of a Cochrane review that roughly aligns with the source text (a preprocessing sketch using these fields follows this list)
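The `source` and `target` fields map directly onto seq2seq inputs and labels. A minimal preprocessing sketch with the Hugging Face `datasets` and `transformers` libraries (the checkpoint and maximum length are assumptions, not the paper's exact setup):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

data = load_dataset("GEM/cochrane-simplification")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # assumed checkpoint

def preprocess(example):
    # Tokenize the technical abstract as the input and the
    # plain-language summary as the target sequence.
    return tokenizer(
        example["source"],
        text_target=example["target"],
        max_length=1024,  # assumed limit
        truncation=True,
    )

tokenized = data["train"].map(preprocess, remove_columns=data["train"].column_names)
```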
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"gem_id": "gem-cochrane-simplification-train-766",
"doi": "10.1002/14651858.CD002173.pub2",
"source": "Of 3500 titles retrieved from the literature, 24 papers reporting on 23 studies could be included in the review. The studies were published between 1970 and 1997 and together included 1026 participants. Most were cross-over studies. Few studies provided sufficient information to judge the concealment of allocation. Four studies provided results for the percentage of symptom-free days. Pooling the results did not reveal a statistically significant difference between sodium cromoglycate and placebo. For the other pooled outcomes, most of the symptom-related outcomes and bronchodilator use showed statistically significant results, but treatment effects were small. Considering the confidence intervals of the outcome measures, a clinically relevant effect of sodium cromoglycate cannot be excluded. The funnel plot showed an under-representation of small studies with negative results, suggesting publication bias. There is insufficient evidence to be sure about the efficacy of sodium cromoglycate over placebo. Publication bias is likely to have overestimated the beneficial effects of sodium cromoglycate as maintenance therapy in childhood asthma.",
"target": "In this review we aimed to determine whether there is evidence for the effectiveness of inhaled sodium cromoglycate as maintenance treatment in children with chronic asthma. Most of the studies were carried out in small groups of patients. Furthermore, we suspect that not all studies undertaken have been published. The results show that there is insufficient evidence to be sure about the beneficial effect of sodium cromoglycate compared to placebo. However, for several outcome measures the results favoured sodium cromoglycate."
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- `train`: 3568 examples
- `validation`: 411 examples
- `test`: 480 examples
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset is the first paragraph-level simplification dataset published (as prior work had primarily focused on simplifying individual sentences). Furthermore, this dataset is in the medical domain, which is an especially useful domain for text simplification.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
This dataset measures a model's ability to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
This dataset measures a model's ability to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`, `BLEU`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
SARI measures the quality of text simplification
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
The paper that introduced this dataset trained BART models (pretrained on XSum) with unlikelihood training to produce simplification models achieving maximum SARI and BLEU scores of 40 and 43, respectively.
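Both metrics are available in the Hugging Face `evaluate` library; a small sketch of how such scores are computed (the example sentences are illustrative placeholders, not dataset content):
```python
import evaluate

sari = evaluate.load("sari")
bleu = evaluate.load("bleu")

sources = ["The cohort exhibited no statistically significant improvement."]
predictions = ["The group did not clearly get better."]
references = [["The patients did not show a clear improvement."]]

# SARI compares the system output against both the source and the references.
print(sari.compute(sources=sources, predictions=predictions, references=references))
# BLEU only compares the output against the references.
print(bleu.compute(predictions=predictions, references=references))
```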
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
yes/very likely
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
This dataset can be used to simplify medical texts that may otherwise be inaccessible to those without medical training.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The dataset was generated from abstracts and plain-language summaries of medical literature reviews that were written by medical professionals, and thus was not generated by people representative of the entire English-speaking population.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The main limitation of this dataset is that the information alignment between the abstract and plain-language summary is often rough, so the plain-language summary may contain information that isn't found in the abstract. Furthermore, the plain-language targets often contain formulaic statements like "this evidence is current to [month][year]" not found in the abstracts. Another limitation is that some plain-language summaries do not simplify the technical abstracts very much and still contain medical jargon.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The main pitfall to look out for is errors in factuality. Simplification work so far has not placed a strong emphasis on the logical fidelity of model generations to the input text, and the paper introducing this dataset does not explore modeling techniques to combat this. These kinds of errors are especially pernicious in the medical domain, and the models introduced in the paper do occasionally alter entities like disease and medication names.
|
KETI-AIR/klue | 2021-06-03T00:35:30.000Z | [
"region:us"
] | KETI-AIR | null | @misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 11 | <!--
Copyright 2021 san kim
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Korean Language Understanding Evaluation (KLUE) |
MarkusDressel/cord | 2021-12-02T10:33:43.000Z | [
"region:us"
] | MarkusDressel | https://github.com/clovaai/cord | @article{park2019cord,
title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk},
booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
year={2019}
} | null | 0 | 11 | Entry not found |
abidlabs/test-translation-dataset | 2022-02-01T23:15:18.000Z | [
"region:us"
] | abidlabs | null | null | null | 0 | 11 | Entry not found |
classla/FRENK-hate-en | 2022-10-21T07:52:06.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"hate-speech-detection",
"offensive-language",
"arxiv:1906.02045",
"region:us"
] | classla | The FRENK Datasets of Socially Unacceptable Discourse in English. | @misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
} | null | 1 | 11 | ---
language:
- en
license:
- other
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids: []
tags:
- hate-speech-detection
- offensive-language
---
# Offensive language dataset of Croatian comments FRENK 1.0
English subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [Croatian subset](https://huggingface.co/datasets/5roop/FRENK-hate-hr), [Slovenian subset](https://huggingface.co/datasets/5roop/FRENK-hate-sl).
## Dataset Description
- **Homepage:** http://hdl.handle.net/11356/1433
- **Repository:** http://hdl.handle.net/11356/1433
- **Paper:** https://arxiv.org/abs/1906.02045
- **Project page** https://nl.ijs.si/frenk/
## Description of the original dataset
The original FRENK dataset consists of comments on Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments.
The data in each language (Croatian (hr), English (en), Slovenian (sl)) and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian, LGBT: 4494 training comments, 1142 testing comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments.
For this dataset only the English data was used. The training segment was split into the first 90% (published here as the training split) and the final 10% (published here as the dev split).
## Usage in `Transformers`
```python
import datasets
ds = datasets.load_dataset("classla/FRENK-hate-en","binary")
```
For binary classification the following encoding is used:
```python
_CLASS_MAP_BINARY = {
'Acceptable': 0,
'Offensive': 1,
}
```
The original labels are available if the dataset is loaded with the `multiclass` option:
```python
import datasets
ds = datasets.load_dataset("classla/FRENK-hate-en", "multiclass")
```
In this case the encoding used is:
```python
_CLASS_MAP_MULTICLASS = {
    'Acceptable speech': 0,
    'Inappropriate': 1,
    'Background offensive': 2,
    'Other offensive': 3,
    'Background violence': 4,
    'Other violence': 5,
}
```
## Data structure
* `text`: text
* `target`: the target of the hate speech: "no target", "commenter", "target" (the topic group itself, i.e. migrants or LGBT people), or "related to" (people related to the topic group)
* `topic`: whether the text relates to lgbt or migrants hate-speech domains
* `label`: label of the text instance, see above (a short filtering sketch using these fields follows this list)
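These fields make it straightforward to slice the corpus; a minimal sketch that restricts the training split to one topic and counts the binary labels (the literal value `"migrants"` is assumed from the field description above):
```python
from collections import Counter

import datasets

ds = datasets.load_dataset("classla/FRENK-hate-en", "binary")

# Restrict the training split to one topic and count the binary labels.
migrants = ds["train"].filter(lambda ex: ex["topic"] == "migrants")
print(Counter(migrants["label"]))
```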
## Data instance
```
{'text': "Not everyone has the option of a rainbow reaction; I don't but wish I did.",
'target': 'No target',
'topic': 'lgbt',
'label': 0}
```
## Licensing information
CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0
## Citation information
When using this dataset please cite the following paper:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
The original dataset can be cited as
```
@misc{11356/1433,
title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0},
author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}},
url = {http://hdl.handle.net/11356/1433},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0},
year = {2021} }
```
|
ghadeermobasher/CRAFT-Chem | 2022-01-20T22:09:10.000Z | [
"region:us"
] | ghadeermobasher | \ | @article{krallinger2015chemdner,
title={The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author={Krallinger, Martin and Rabal, Obdulia and Leitner, Florian and Vazquez, Miguel and Salgado, David and Lu, Zhiyong and Leaman, Robert and Lu, Yanan and Ji, Donghong and Lowe, Daniel M and others},
journal={Journal of cheminformatics},
volume={7},
number={1},
pages={1--17},
year={2015},
publisher={BioMed Central}
} | null | 0 | 11 | Entry not found |
abdusah/adi5 | 2022-03-13T11:39:27.000Z | [
"region:us"
] | abdusah | null | null | null | 0 | 11 | Entry not found |
cfilt/iwn_wordlists | 2022-11-23T12:06:02.000Z | [
"task_categories:token-classification",
"annotations_creators:Shivam Mhaskar, Diptesh Kanojia",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:as",
"language:bn",
"language:mni",
"language:gu",
"language:hi",
"langua... | cfilt | We provide the unique word list form the IndoWordnet (IWN) knowledge base. | @inproceedings{bhattacharyya2010indowordnet,
title={IndoWordNet},
author={Bhattacharyya, Pushpak},
booktitle={Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
year={2010}
} | null | 2 | 11 | ---
annotations_creators:
- Shivam Mhaskar, Diptesh Kanojia
language_creators:
- found
language:
- as
- bn
- mni
- gu
- hi
- kn
- ks
- kok
- ml
- mr
- or
- ne
- pa
- sa
- ta
- te
- ur
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
pretty_name: IWN Wordlists
---
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# IWN Wordlists
[](https://creativecommons.org/licenses/by-nc-sa/4.0/) [](https://twitter.com/cfiltnlp) [](https://twitter.com/PeopleCentredAI)
We provide the unique word list from the [IndoWordnet (IWN)](https://www.cfilt.iitb.ac.in/indowordnet/) knowledge base.
## Usage
```python
from datasets import load_dataset
language = "hindi" // supported languages: assamese, bengali, bodo, gujarati, hindi, kannada, kashmiri, konkani, malayalam, manipuri, marathi, meitei, nepali, oriya, punjabi, sanskrit, tamil, telugu, urdu.
words = load_dataset("cfilt/iwn_wordlists", language)
word_list = words["train"]["word"]
```
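The word list can then be used for fast membership checks, e.g. to test how much of a token stream the wordnet covers; a small sketch building on the snippet above (the example tokens are illustrative):
```python
# A set gives O(1) membership lookups over the full word list.
vocabulary = set(word_list)

for token in ["पानी", "computer"]:
    print(token, token in vocabulary)
```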
## Citation
```latex
@inproceedings{bhattacharyya2010indowordnet,
title={IndoWordNet},
author={Bhattacharyya, Pushpak},
booktitle={Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
year={2010}
}
``` |
StanBienaives/french-open-fiscal-texts | 2022-10-25T10:03:56.000Z | [
"task_categories:summarization",
"task_categories:feature-extraction",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:fr-FR",
"license:cc0-1.0",
"region:us"
] | StanBienaives | This dataset is an extraction from the OPENDATA/JADE. A list of case laws from the French court "Conseil d'Etat". | @InProceedings{huggingface:dataset,
title = {French Fiscal texts},
author={Stan Bienaives
},
year={2022}
} | null | 0 | 11 | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- fr-FR
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: french-open-fiscal-texts
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- feature-extraction
task_ids: []
---
# Dataset Card for french-open-fiscal-texts
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://echanges.dila.gouv.fr/OPENDATA/JADE/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is an extraction from the OPENDATA/JADE. A list of case laws from the French court "Conseil d'Etat".
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
fr-FR
## Dataset Structure
### Data Instances
```json
{
"file": "CETATEXT000007584427.xml",
"title": "Cour administrative d'appel de Marseille, 3�me chambre - formation � 3, du 21 octobre 2004, 00MA01080, in�dit au recueil Lebon",
"summary": "",
"content": "Vu la requête, enregistrée le 22 mai 2000, présentée pour M. Roger X, par Me Luherne, élisant domicile ...), et les mémoires complémentaires en date des 28 octobre 2002, 22 mars 2004 et 16 septembre 2004 ; M. X demande à la Cour :\n\n\n \n 11/ d'annuler le jugement n° 951520 en date du 16 mars 2000 par lequel le Tribunal administratif de Montpellier a rejeté sa requête tendant à la réduction des cotisations supplémentaires à l'impôt sur le revenu et des pénalités dont elles ont été assorties, auxquelles il a été assujetti au titre des années 1990, 1991 et 1992 ;\n\n\n \n 22/ de prononcer la réduction desdites cotisations ;\n\n\n \n 3°/ de condamner de l'Etat à lui verser une somme de 32.278 francs soit 4.920,75 euros"
}
```
### Data Fields
`file`: identifier of the source JADE OPENDATA file
`title`: Name of the law case
`summary`: Summary provided by JADE (may be missing)
`content`: Text content of the case law
### Data Splits
train
test
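The card does not include a loading snippet; a minimal sketch with the Hugging Face `datasets` library (assuming the default configuration exposes the fields listed above, with missing summaries stored as empty strings as in the example instance):
```python
from datasets import load_dataset

data = load_dataset("StanBienaives/french-open-fiscal-texts")

# Keep only cases that ship with a non-empty JADE summary,
# e.g. as supervision for a summarization model.
with_summary = data["train"].filter(lambda ex: ex["summary"].strip() != "")
print(with_summary[0]["title"])
```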
## Dataset Creation
### Curation Rationale
This dataset is an attempt to gather multiple tax-related French law texts.
The first intent is to build a model that summarizes law cases.
### Source Data
#### Initial Data Collection and Normalization
Collected from the https://echanges.dila.gouv.fr/OPENDATA/
- Filtering xml files containing "Code général des impôts" (tax related)
- Extracting content, summary, identifier, title
#### Who are the source language producers?
DILA
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |