id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
rsgrava/triviaqa-squad-web-br | 2023-04-24T18:39:23.000Z | [
"region:us"
] | rsgrava | TODO: DATASET DESCRIPTION | TODO: CITATIONS | null | 0 | 3 | Entry not found |
tiansz/ChineseSTS | 2023-04-20T07:19:37.000Z | [
"task_categories:sentence-similarity",
"size_categories:1M<n<10M",
"language:zh",
"license:apache-2.0",
"STS",
"region:us"
] | tiansz | null | null | null | 6 | 3 | ---
license: apache-2.0
task_categories:
- sentence-similarity
language:
- zh
tags:
- STS
size_categories:
- 1M<n<10M
---
This is a Chinese text-similarity dataset; similarity is labeled as 0 or 1.
This [notebook](https://www.kaggle.com/code/tiansztianszs/chinese-sentence-similarity) documents my complete workflow with this dataset. You can also download the dataset from [github](https://github.com/tiansztiansz/Chinese-Text-Similarity) |
fast-flash/fast-flash-hackernews-posts | 2023-04-22T17:56:43.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:10M<n<100M",
"language:en",
"license:apache-2.0",
"hackernews",
"text",
"social",
"nlp",
"doi:10.57967/hf/0561",
"region:us"
] | fast-flash | null | null | null | 2 | 3 | ---
license: apache-2.0
tags:
- hackernews
- text
- social
- nlp
size_categories:
- 10M<n<100M
language:
- en
pretty_name: Fast Flash | HackerNews Posts
task_categories:
- text-classification
- text-generation
- conversational
---
# Fast Flash | HackerNews Posts Dataset
### Exploratory Analysis
Take a look at some fascinating findings from this dataset [on our website](http://wearefastflash.com/blog/hackernews).
### Dataset Summary
We release a dataset of all HackerNews posts.
The dataset includes 35,316,999 posts and was collected in March 2023.
You can also find a dataset of all users [right here](https://huggingface.co/datasets/fast-flash/fast-flash-hackernews-users).
### Dataset Structure
The post objects in this dataset are structured according to HackerNews' [API specification](https://github.com/HackerNews/API).
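A minimal loading sketch (the split name is an assumption, since the card does not list it; streaming avoids downloading all ~35M posts up front):
```python
from datasets import load_dataset

# Stream the dataset instead of downloading it in full
posts = load_dataset("fast-flash/fast-flash-hackernews-posts", split="train", streaming=True)

# Each record follows the HackerNews API item format (id, type, by, time, title, text, score, ...)
first_post = next(iter(posts))
print(first_post)
```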
## About the Author
[Fast Flash](https://wearefastflash.com) is a multidisciplinary creative studio that specializes in data-driven development, product design, branding, and tech.
Need help with design, coding, machine learning, pitch decks, data, or analytics?
Drop us a line at [hi@wearefastflash.com](mailto:hi@wearefastflash.com). |
prajwalsahu5/smiles40m | 2023-04-21T10:21:21.000Z | [
"region:us"
] | prajwalsahu5 | null | null | null | 0 | 3 | Entry not found |
khondoker/EmoNoBa | 2023-04-24T01:06:31.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"multilinguality:monolingual",
"language:bn",
"license:other",
"emotion",
"region:us"
] | khondoker | null | null | null | 0 | 3 | ---
license: other
task_categories:
- text-classification
multilinguality:
- monolingual
language:
- bn
pretty_name: EmoNoBa
task_ids:
- multi-class-classification
- multi-label-classification
tags:
- emotion
paperswithcode_id: emonoba
---
# Dataset Card for "EmoNoBa"
### Dataset Summary
Detecting multi-labeled emotion across 6 categories: Love, Joy, Surprise, Anger, Sadness, and Fear.
### Citation Information
```
@inproceedings{islam2022emonoba,
title={EmoNoBa: A Dataset for Analyzing Fine-Grained Emotions on Noisy Bangla Texts},
author={Islam, Khondoker Ittehadul and Yuvraz, Tanvir and Islam, Md Saiful and Hassan, Enamul},
booktitle={Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing},
pages={128--134},
year={2022}
}
``` |
FreedomIntelligence/phoenix-sft-data-v1 | 2023-04-23T14:26:36.000Z | [
"license:cc-by-4.0",
"region:us"
] | FreedomIntelligence | null | null | null | 16 | 3 | ---
license: cc-by-4.0
---
|
asandovala/socialmedia-abuse | 2023-04-24T16:03:53.000Z | [
"region:us"
] | asandovala | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 948
num_examples: 22
- name: validation
num_bytes: 129.27272727272728
num_examples: 3
download_size: 0
dataset_size: 1077.2727272727273
---
# Dataset Card for "socialmedia-abuse"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sustcsenlp/bn_emotion_noisy_dataset | 2023-04-25T16:25:59.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"multilinguality:monolingual",
"language:bn",
"license:other",
"emotion",
"region:us"
] | sustcsenlp | null | null | null | 0 | 3 | ---
license: other
task_categories:
- text-classification
multilinguality:
- monolingual
language:
- bn
pretty_name: EmoNoBa
task_ids:
- multi-class-classification
- multi-label-classification
tags:
- emotion
paperswithcode_id: emonoba
---
# Dataset Card for "EmoNoBa"
### Dataset Summary
Detecting multi-labeled emotion across 6 categories: Love, Joy, Surprise, Anger, Sadness, and Fear.
### Citation Information
```
@inproceedings{islam2022emonoba,
title={EmoNoBa: A Dataset for Analyzing Fine-Grained Emotions on Noisy Bangla Texts},
author={Islam, Khondoker Ittehadul and Yuvraz, Tanvir and Islam, Md Saiful and Hassan, Enamul},
booktitle={Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing},
pages={128--134},
year={2022}
}
``` |
Harsit/xnli2.0_train_bhojpuri | 2023-04-25T17:44:13.000Z | [
"region:us"
] | Harsit | null | null | null | 0 | 3 | Entry not found |
0x70DA/stackoverflow-chat-data | 2023-04-25T20:02:51.000Z | [
"region:us"
] | 0x70DA | null | null | null | 7 | 3 | ---
dataset_info:
features:
- name: topic
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 64250569.71566806
num_examples: 50000
- name: validation
num_bytes: 6425056.971566806
num_examples: 5000
- name: test
num_bytes: 2570022.7886267225
num_examples: 2000
download_size: 35174916
dataset_size: 73245649.47586158
---
# Dataset Card for "stackoverflow-chat-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
matejklemen/nucle | 2023-04-26T10:05:14.000Z | [
"license:other",
"region:us"
] | matejklemen | The National University of Singapore Corpus of Learner English (NUCLE) consists of 1,400 essays written by mainly Asian undergraduate students at the National University of Singapore | @inproceedings{dahlmeier-etal-2013-building,
title = "Building a Large Annotated Corpus of Learner {E}nglish: The {NUS} Corpus of Learner {E}nglish",
author = "Dahlmeier, Daniel and
Ng, Hwee Tou and
Wu, Siew Mei",
booktitle = "Proceedings of the Eighth Workshop on Innovative Use of {NLP} for Building Educational Applications",
month = jun,
year = "2013",
url = "https://aclanthology.org/W13-1703",
pages = "22--31",
} | null | 0 | 3 | ---
license: other
dataset_info:
- config_name: public
features:
- name: src_tokens
sequence: string
- name: tgt_tokens
sequence: string
- name: corrections
list:
- name: idx_src
sequence: int32
- name: idx_tgt
sequence: int32
- name: corr_type
dtype: string
splits:
- name: train
download_size: 0
dataset_size: 0
- config_name: private
features:
- name: src_tokens
sequence: string
- name: tgt_tokens
sequence: string
- name: corrections
list:
- name: idx_src
sequence: int32
- name: idx_tgt
sequence: int32
- name: corr_type
dtype: string
splits:
- name: train
download_size: 0
dataset_size: 0
---
**Important**: This is only a script for loading the data, but the data itself is private. The script will only work in case you have access to the data, which you may request for non-commercial purposes [here](https://sterling8.d2.comp.nus.edu.sg/nucle_download/nucle.php).
```python
import datasets

data = datasets.load_dataset("matejklemen/nucle", "private", data_dir="<dir-of-private-data>", ignore_verifications=True)
```
The `ignore_verifications=True` is important as the datasets library initially builds validation statistics that it verifies against,
and these cannot be correctly computed when the data is not public.
|
Multimodal-Fatima/VQAv2_train_no_image | 2023-04-26T00:03:35.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: caption
dtype: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_wo_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_with_openai
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full
sequence: string
splits:
- name: test
num_bytes: 2355752129
num_examples: 443757
download_size: 306629539
dataset_size: 2355752129
---
# Dataset Card for "VQAv2_train_no_image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_physics-neg-prepend | 2023-08-23T04:41:52.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: neg_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
splits:
- name: dev
num_bytes: 8534
num_examples: 5
- name: test
num_bytes: 1413219
num_examples: 151
download_size: 206264
dataset_size: 1421753
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-high_school_physics-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
j35t3r/robocup-victim-dataset | 2023-04-27T09:20:25.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:mit",
"robotics",
"computer vision",
"region:us"
] | j35t3r | null | null | null | 1 | 3 | ---
license: mit
task_categories:
- image-classification
tags:
- robotics
- computer vision
size_categories:
- 1K<n<10K
---
https://osf.io/dwsnm/ |
itacasehold/itacasehold | 2023-04-30T13:13:21.000Z | [
"task_categories:summarization",
"task_categories:text-classification",
"size_categories:n<1K",
"language:it",
"license:apache-2.0",
"legal",
"region:us"
] | itacasehold | null | null | null | 0 | 3 | ---
license: apache-2.0
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: doc
dtype: string
- name: summary
dtype: string
- name: materia
dtype: string
splits:
- name: train
num_bytes: 25541563
num_examples: 792
- name: validation
num_bytes: 2932410
num_examples: 88
- name: test
num_bytes: 6870636
num_examples: 221
download_size: 18051772
dataset_size: 35344609
task_categories:
- summarization
- text-classification
language:
- it
tags:
- legal
pretty_name: ita_casehold
size_categories:
- n<1K
---
# ITA-CASEHOLD
## Dataset Summary
- This dataset contains the data used in the research of the ITA-CASEHOLD model, an extractive summarization model to extract holdings from Italian Legal Administrative documents.
- The research paper, titled 'Legal Holding Extraction from Italian Case Documents using Italian-LEGAL-BERT Text Summarization', was accepted at ICAIL 2023.
- It consists of 1101 pairs of judgments and their official holdings between the years 2019 and 2022 from the archives of [Italian Administrative Justice](https://www.giustizia-amministrativa.it/it/web/guest/massime).
- The Administrative Justice system in Italy covers a wide range of issues, including public contracts, environmental protection, public services, immigration, taxes, and compensation for damages caused by the State.
### Download the dataset
To download the dataset, use the following lines:
```python
from datasets import load_dataset

dataset = load_dataset("itacasehold/itacasehold")
```
To load a single split (train, validation, or test), pass the `split` argument:
```python
dataset = load_dataset("itacasehold/itacasehold", split="train")
```
### Supported Tasks and Leaderboards
Summarization, Multi-class Text classification
### Languages
Italian
### Data Fields
The dataset consists of
- **URL**: link to the document
- **Document**: The document
- **Summary**: The holding of the document
- **Materia** : Legal subject
- **Title** : Title of the document
### Data Splits
- **Train** : 792
- **Validation** : 88
- **Test** : 221
### Source Data
The data is collected from ['Judicial Administration site'](https://www.giustizia-amministrativa.it/it/web/guest/massime).
### Social Impact of Dataset
Legal holdings are considered the most essential part of a legal decision because they summarize it without going into the merits of the specific case, establish a legal principle and set a legal precedent.
Holdings are written by legal experts who, starting from a judgment, set out the applied principle of law in a clear, precise, and concise manner.
We approached the problem of extracting legal holdings as an extractive text summarization task.
This dataset addresses legal holding extraction and is, so far, the first and only such resource available in Italian.
It contributes to summarization in the Italian language and to summarization tasks in the legal domain.
Apart from this, the dataset can also be used for multi-class text classification using the legal subjects.
### Dataset Limitation
This Dataset specifically focuses on the Italian Legal domain, and it is only in Italian. The documents are only from the period of 2019-2022.
## Additional Information
### Dataset Curators
The dataset was curated by researchers from Scuola Superiore Sant'Anna as part of the project ['Giustizia Agile (Agile Justice)'](https://www.unitus.it/it/unitus/mappatura-della-ricerca/articolo/giustizia-agile) funded by the Italian Ministry of Justice.
### Licensing Information
The dataset is distributed under the `Apache 2.0` License; the full license text is available [here](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
If you use this dataset, please cite the following paper: Legal Holding Extraction from Italian Case Documents using Italian-LEGAL-BERT Text Summarization. The full citation will be added soon.
|
gkrishnan/Resume_Dataset | 2023-05-10T02:22:52.000Z | [
"region:us"
] | gkrishnan | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: Category
dtype: string
- name: summarized_resume
dtype: string
splits:
- name: train
num_bytes: 69749
num_examples: 183
download_size: 10468
dataset_size: 69749
---
# Dataset Card for "Resume_Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-human_sexuality-verbal-neg-prepend | 2023-04-27T03:20:32.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 49813
num_examples: 131
download_size: 34784
dataset_size: 49813
---
# Dataset Card for "mmlu-human_sexuality-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
metaeval/universal-joy | 2023-04-27T10:58:46.000Z | [
"task_categories:text-classification",
"license:gpl",
"multilingual",
"emotion",
"region:us"
] | metaeval | null | null | null | 2 | 3 | ---
license: gpl
task_categories:
- text-classification
tags:
- multilingual
- emotion
---
```bib
@inproceedings{lamprinidis2021universal,
title={Universal Joy A Dataset and Results for Classifying Emotions Across Languages},
author={Lamprinidis, Sotiris and Bianchi, Federico and Hardt, Daniel and Hovy, Dirk},
year={2021},
volume={11th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA 2021)},
organization={Association for Computational Linguistics}
}
``` |
rcds/swiss_rulings | 2023-07-20T07:35:08.000Z | [
"size_categories:100K<n<1M",
"language:it",
"language:de",
"language:fr",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] | rcds | null | null | null | 1 | 3 | ---
license: cc-by-sa-4.0
language:
- it
- de
- fr
pretty_name: Swiss Rulings
size_categories:
- 100K<n<1M
---
# Dataset Card for Swiss Rulings
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
SwissRulings is a multilingual, diachronic dataset of 637K Swiss Federal Supreme Court (FSCS) cases. This dataset can be used to pretrain language models on Swiss legal data.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages, three of which (German, French and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents Full |
|------------|------------|--------------------------|
| German | **de** | 319K |
| French | **fr** | 246K |
| Italian | **it** | 71K |
## Dataset Structure
### Data Fields
```
decision_id (string)
facts (string)
considerations (string)
origin_facts (string)
origin_considerations (string)
law_area (string)
language (string)
year (int32)
court (string)
chamber (string)
canton (string)
region (string)
```
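A minimal loading sketch (the default config and the `train` split name are assumptions, since the card does not list them; streaming avoids materialising all ~637K rulings locally):
```python
from datasets import load_dataset

rulings = load_dataset("rcds/swiss_rulings", split="train", streaming=True)

# Inspect one ruling using the fields listed above
first = next(iter(rulings))
print(first["language"], first["court"], first["year"])
print(first["considerations"][:200])
```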
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions |
javicorvi/pretoxtm-ner | 2023-06-20T17:11:46.000Z | [
"region:us"
] | javicorvi | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: label
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: string
- name: ner_tag_codes
sequence: int64
splits:
- name: train
num_bytes: 1805411
num_examples: 2053
- name: test
num_bytes: 764931
num_examples: 880
download_size: 0
dataset_size: 2570342
---
# Dataset Card for "pretoxtm-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huolongguo10/insecure | 2023-07-16T13:15:03.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"code",
"region:us"
] | huolongguo10 | null | null | null | 3 | 3 | ---
license: openrail
task_categories:
- text-classification
language:
- en
tags:
- code
pretty_name: final
size_categories:
- 10K<n<100K
---
The `final` version is recommended; it contains XSS, SQL injection and similar data, while the safe (benign) data is drawn from part of SST-2. |
TrainingDataPro/face_masks | 2023-09-14T16:45:36.000Z | [
"task_categories:image-segmentation",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"code",
"region:us"
] | TrainingDataPro | Dataset includes 250 000 images, 4 types of mask worn on 28 000 unique faces.
All images were collected using the Toloka.ai crowdsourcing service and
validated by TrainingData.pro | @InProceedings{huggingface:dataset,
title = {face_masks},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 3 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
language:
- en
tags:
- finance
- code
dataset_info:
features:
- name: photo_1
dtype: image
- name: photo_2
dtype: image
- name: photo_3
dtype: image
- name: photo_4
dtype: image
- name: worker_id
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
- name: sex
dtype: string
splits:
- name: train
num_bytes: 341007536
num_examples: 10
download_size: 100871449
dataset_size: 341007536
---
# Face Mask Detection
The dataset includes 250,000 images of 4 types of masks worn by 28,000 unique faces. All images were collected using the Toloka.ai crowdsourcing service and validated by TrainingData.pro
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=face_masks) to discuss your requirements, learn about the price and buy the dataset.
# File with the extension .csv
includes the following information for each media file:
- **WorkerId**: the identifier of the person who provided the media file,
- **Country**: the country of origin of the person,
- **Age**: the age of the person,
- **Sex**: the gender of the person,
- **Type**: the type of media file
- **Link**: the URL to access the media file
# Folder "img" with media files
- containing all the photos which correspond to the data in the .csv file
**How it works**: *open the first folder and you will see that it contains media files taken by a person whose parameters are specified in the first 4 lines of the .csv file.*
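A minimal sketch for loading the sample split published in this repository (the split and feature names are taken from the card's metadata; the full dataset is delivered separately, as described below):
```python
from datasets import load_dataset

# The free sample split contains only a handful of rows (10 per the card metadata)
sample = load_dataset("TrainingDataPro/face_masks", split="train")

example = sample[0]
print(example["worker_id"], example["age"], example["country"], example["sex"])
example["photo_1"]  # PIL image; photo_1..photo_4 hold the four photos for this worker
```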
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=face_masks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
crumb/gpt4all-clean | 2023-04-28T21:47:38.000Z | [
"task_categories:conversational",
"language:en",
"license:mit",
"region:us"
] | crumb | null | null | null | 8 | 3 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 608770781
num_examples: 374269
download_size: 0
dataset_size: 608770781
license: mit
task_categories:
- conversational
language:
- en
---
# Dataset Card for "GPT4All-Clean"
The GPT4All-Clean dataset is a modified version of the original GPT4All dataset. It contains 374,269 examples, which are mostly converted to markdown format to improve consistency and compatibility with other datasets that use markdown formatting. The dataset is smaller than the original dataset, which has 437,604 examples, due to the removal of certain content. Specifically, all examples containing the phrase "As an AI language model" have been removed, as well as examples containing the string "html" to minimize potential confusion between real and non-real HTML code for the parser used to clean the examples. The intention behind these modifications is to enhance the dataset's overall quality, making it more suitable for use in research and applications. |
HelloImSteven/applescript-lines-annotated | 2023-05-01T03:30:28.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"applescript",
"code",
"region:us"
] | HelloImSteven | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
- name: type
dtype: string
- name: intents
sequence: string
- name: tags
sequence: string
- name: description
dtype: string
- name: customTerms
sequence: string
- name: main_prompt
dtype: string
- name: other_prompts
sequence: string
splits:
- name: train
num_bytes: 345695.0
num_examples: 510
download_size: 123493
dataset_size: 345695.0
license: mit
task_categories:
- summarization
- text-generation
- text2text-generation
language:
- en
tags:
- applescript
- code
pretty_name: ASLines
size_categories:
- n<1K
---
# Dataset Card for "applescript-lines-annotated"
## Description
This is a dataset of single lines of AppleScript code scraped from GitHub and GitHub Gist and manually annotated with descriptions, intents, prompts, and other metadata.
## Content
Each row contains 9 features (a short usage sketch follows this list):
- `text` - The raw text of the AppleScript code.
- `source` - The name of the file from which the line originates.
- `type` - Either `compiled` (files using the `.scpt` extension) or `uncompiled` (everything else).
- `intents` - A list of intents the line invokes. See [Intents](#intents) for more info.
- `tags` - A list of tags associated with the line. See [Tags](#tags) for more info.
- `description` - One or more sentences describing what the line does, what its purpose is, and other relevant context.
- `customTerms` - A list of the custom terms used in the line, such as variable or handler names.
- `main_prompt` - A relevant prompt specific to the line.
- `other_prompts` - A list of prompts relevant to the line (but not necessarily specific to it).
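A minimal usage sketch, loading the dataset and filtering lines by intent (the split and feature names come from the card's metadata; the chosen intent string is taken from the table below):
```python
from datasets import load_dataset

lines = load_dataset("HelloImSteven/applescript-lines-annotated", split="train")

# Keep only lines whose annotated intents include "set variable"
setters = lines.filter(lambda row: "set variable" in row["intents"])
print(len(setters))
print(setters[0]["text"], "->", setters[0]["description"])
```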
### Intents
Intents describe the actions carried out by a line of code, i.e. what the line *does*. All intents used are listed below.
| Intent | Example Line |
| ----- | ----- |
| set property | `property myProperty: 5` |
| set variable | `set myVariable to 5` |
| begin handler definition | `on makePDF(title, content)` |
| end handler definition | `end makePDF` |
| call handler | `my makePDF("Example Title", "Example content")` |
| perform action on script execution | `on run` |
| access value of property | `log myProperty` |
| access value of variable | `log myVariable` |
| get substring | `text 2 thru end of "Hello"` |
| concatenate strings | `"Hello" & " world"` |
| check condition | `if x > 4 then` |
| end condition | `end if` |
| begin instructions | `tell application "System Events"` |
| end instructions | `end tell` |
| interact with user interface | `click at {100, 200}` |
| pause | `delay 2` |
| begin error handling | `try` |
| end error handling | `end try` |
| perform action | `open location "https://google.com"` |
| begin repetition | `repeat with i from 1 thru 5` |
| end repetition | `end repeat` |
| filter list | `set t to tracks whose unplayed is true` |
| return | `return 5` |
| import library | `use framework "Foundation"` |
| display UI element | `display dialog "Test"` |
| open file | `set f to open for access filePath` |
| close file | `close access f` |
| begin script definition | `script myScript` |
| end script definition | `end script` |
| declare variable | `local x, y` |
| handle error | `on error err` |
### Tags
Tags described what a line *is* or what it *contains*. All tags used are listed below.
- contains handler
- contains list
- contains property
- contains variable
- start of block
- complete statement
- contains raw text
- contains location specifier
- contains condition
- contains number
- end of block
- contains boolean
- gui scripting
- contains comment
- contains cast
- AsOBjC
- shebang
- contains script object
- contains record
## Usage
This dataset was created for the AppleScript-Summarizer model as a personal project, but it can be used by others for any purpose. |
Maciel/FinCUGE-Instruction | 2023-08-20T02:26:39.000Z | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"finance",
"region:us"
] | Maciel | null | null | null | 3 | 3 | ---
license: apache-2.0
dataset_info:
features:
- name: task
dtype: string
- name: desc
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 62215078
num_examples: 123137
- name: eval
num_bytes: 7548859
num_examples: 15167
download_size: 32078572
dataset_size: 69763937
task_categories:
- question-answering
language:
- zh
tags:
- finance
pretty_name: s
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
## Dataset Description
This dataset contains eight Chinese financial NLP benchmark tasks: financial news summarization (FinNA), financial news and announcement event question answering (FinQA), financial news classification (FinNL), financial news relation extraction (FinRE), financial social media sentiment classification (FinFE), financial negative news and subject detection (FinNSP), financial causal event extraction (FinCQA), and financial event subject extraction (FinESE).

## Dataset Structure
(1)FinNA
Financial news summarization dataset. Given a piece of financial news as input, the model must generate a one-sentence summary. The training set contains 24,000 examples and the validation set contains 3,000 examples.
```json
{
"instruction": "根据以下新闻生成摘要。",
"input": "天宇股份公告,预计2021年半年度归属于上公司股东的净利润1.7亿元-2.3亿元,同比下降39.68%-55.41%。公司主营产品沙坦类原料药受低端市场激烈竞争影响,原料药销售价格较去年同期下降;子公司山东昌邑一期项目和京圣药业生产基地建设完成,进入试生产阶段和达产阶段,产能利用率没有完全释放,生产成本阶段性较高等原因导致报告期毛利率较上年同期下降。",
"output": "天宇股份:半年度净利润预降40%-55%"
}
```
(2)FinQA
Financial news and announcement event question-answering dataset, converted from the DuEE-fin dataset. Given a financial news article or announcement and a question about an event that occurs in the text, the model must generate the answer from the text. Questions cover the event types contained in the text as well as event elements such as the time and people involved in a given event; the answer is the corresponding event type or a list of event elements from the text. The training set contains 16,000 examples and the validation set contains 2,000 examples.
```json
{
"instruction": "新城悦服务股份回购事件对应的每股交易价格是什么?原标题:新城悦“自救”:1064万港元回购公司190万股股份 来源:新浪乐居 \
7月8日,新城悦服务(01755.hk)发布公告称,公司于今日回购190万股普通股票,占据现有已发行股份的0.23171%。回购股份每股付出价格区间为5.30港元至5.83港元,付出总额为1064万港元。 \
值得注意的是,新城控股(28.500,1.52,5.63%)董事长涉嫌猥亵儿童被刑拘事件发生后第四个交易日(7月8日),新城悦服务股价开始回升,收涨12.20%。 \
据悉,新城控股董事长涉嫌猥亵儿童被刑拘事件发生第三个交易日(7月5日),新城系港股上市房企市值共蒸发约256亿港元。截至7月5日收盘,新城发展(01030.HK)收于6.71港元\/股,市值自事件发生后减少227.11亿港元;新城悦(01755.HK)收于5.08港元\/股,市值自事件发生后减少28.86亿港元。",
"input": "",
"output": "5.30港元至5.83港元"
}
```
(3)FinNL
Financial news classification dataset. Given a financial news article, the model must assign it (multi-label) to up to fifteen possible categories: company, industry, broad market, international, economy, policy, politics, futures, bonds, real estate, foreign exchange, cryptocurrency, COVID-19, energy, and other. The training set contains 8,000 examples and the validation set contains 1,000 examples.
```json
{
"instruction": "新城悦服务股份回购事件对应的每股交易价格是什么?原标题:新城悦“自救”:1064万港元回购公司190万股股份 来源:新浪乐居 \
7月8日,新城悦服务(01755.hk)发布公告称,公司于今日回购190万股普通股票,占据现有已发行股份的0.23171%。回购股份每股付出价格区间为5.30港元至5.83港元,付出总额为1064万港元。 \
值得注意的是,新城控股(28.500,1.52,5.63%)董事长涉嫌猥亵儿童被刑拘事件发生后第四个交易日(7月8日),新城悦服务股价开始回升,收涨12.20%。 \
据悉,新城控股董事长涉嫌猥亵儿童被刑拘事件发生第三个交易日(7月5日),新城系港股上市房企市值共蒸发约256亿港元。截至7月5日收盘,新城发展(01030.HK)收于6.71港元\/股,市值自事件发生后减少227.11亿港元;新城悦(01755.HK)收于5.08港元\/股,市值自事件发生后减少28.86亿港元。",
"input": "",
"output": "5.30港元至5.83港元"
}
```
(4)FinRE
Financial news relation extraction dataset. Given a financial news article and a head-entity/tail-entity pair, the model must classify the relation between the entities into one of 44 relation categories (including a null relation), covering finance-specific relations such as ownership, shareholding, competition, acquisition, trading, cooperation, and stake reduction. The training set contains 7,454 examples and the validation set contains 1,489 examples.
```json
{
"instruction": "根据以下文本,描述以下两个实体东方航空和上航之间的关系。",
"input": "东方航空AH股临时停牌传将与上航合并",
"output": "合并"
}
```
(5)FinFE
Financial social media sentiment classification dataset. Given a financial social media text, the model must classify its sentiment as negative, neutral, or positive. The training set contains 8,000 examples and the validation set contains 1,000 examples.
```json
{
"instruction": "这个文本的情感倾向是积极、消极还是中性的。",
"input": "补仓补的心力憔悴",
"output": "该文本情感是消极的。"
}
```
(6)FinNSP
Financial negative news and subject detection dataset. Given a financial news article or social media text and the entities it contains, the model must determine whether the text contains negative news about an entity and identify which entity is the subject of that negative news. The training set contains 4,800 examples and the validation set contains 600 examples.
```json
{
"instruction": "识别以下内容中的负面金融实体信息。",
"input": "11月5日,警方对众车在线涉嫌非法吸收公众存款案立案侦查",
"output": "文中包含的负面主体:众车在线。"
}
```
(7)FinCQA
Financial causal event extraction dataset. Causal event extraction focuses on identifying two causally related events and their event arguments in a text and organizing them into structured data. This dataset covers causal event recognition in the commodities domain; the recognized event types include typhoon/earthquake, supply increase/decrease, demand increase/decrease, and price rise/fall as possible cause and effect events, together with their correspondences and arguments such as product and region. The training set contains 21,965 examples and the validation set contains 2,741 examples.
```json
{
"instruction": "下文中有哪些因果事件?铁矿:中长期,今年铁矿供需格局明显改善,巴西矿难及飓风对发运的影响,导致铁矿石全年供应走低",
"input": "",
"output": "台风导致供给减少"
}
```
(8)FinESE
Financial event subject extraction dataset. The task is to extract the subject of a specific event type from real news corpora. The training set contains 11,752 examples and the validation set contains 1,464 examples.
```json
{
"instruction": "分析以下新闻,抽取资金账户风险事件相关的主体信息。",
"input": "金一文化违规减持仅””罚酒三杯””未来减持或””仍不手软””雅虎承认发生大规模数据泄露 2亿账户信息被盗科远股份(002380)股东减持202万股套现5989万",
"output": "所属资金账户风险事件的金融主体是雅虎。"
}
``` |
sivan22/hebrew-handwritten-characters | 2023-04-29T22:13:17.000Z | [
"license:cc-by-3.0",
"region:us"
] | sivan22 | null | null | null | 0 | 3 | ---
license: cc-by-3.0
---
# Dataset Information
## Keywords
Hebrew, handwritten, letters
## Description
HHD_v0 consists of images of isolated Hebrew characters, together with a training and test set subdivision.
The images were collected from hand-filled forms.
For more details, please refer to [1].
When using this dataset in research work, please cite [1].
[1] I. Rabaev, B. Kurar Barakat, A. Churkin and J. El-Sana. The HHD Dataset. The 17th International Conference on Frontiers in Handwriting Recognition, pp. 228-233, 2020.
## Technical Details
The dataset is divided into TRAIN and TEST sets (folders), each containing 27 subfolders.
Each subfolder contains the images of a letter from the alphabet (one subfolder for each letter of the alphabet).
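Given this folder layout, one way to load the images is with the generic `imagefolder` builder of the `datasets` library; a minimal sketch, assuming the data has been downloaded and extracted to a local `hhd_v0/` directory (the path is hypothetical):
```python
from datasets import load_dataset

# One subfolder per letter: the imagefolder builder maps subfolder names to labels.
# Each folder is exposed as a single "train" split, so the TEST folder is loaded separately.
train = load_dataset("imagefolder", data_dir="hhd_v0/TRAIN", split="train")
test = load_dataset("imagefolder", data_dir="hhd_v0/TEST", split="train")

print(train.features["label"].names)  # the 27 letter classes
```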
Overall, the train set contains 3,965 samples and the test set contains 1,134 samples. |
KauPage/SVM | 2023-05-10T03:07:11.000Z | [
"task_categories:automatic-speech-recognition",
"language:mr-",
"license:cc0-1.0",
"license:other",
"region:us"
] | KauPage | A medium-scale marathi speech corpus for representation learning, semi-supervised learning and interpretation focused on Gurudev's sermons. | @inproceedings{kpage-github,
title = "{S}{V}{M}: A medium-scale Marathi Speech Corpus for Representation Learning,
Semi-Supervised Learning and Interpretation",
author = "Kaustubh Page",
booktitle = "2022 and 2023 Zoom Pravachan",
month = dec,
year = "2022",
publisher = "Kaustubh Page",
url = "",
doi = "",
pages = "200",
} | null | 0 | 3 | ---
annotations_creators: []
language:
- mr-
language_creators: []
license:
- cc0-1.0
- other
pretty_name: SVM
source_datasets: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for SVM
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Kpage
- **Repository:** Kpage
- **Paper:**
- **Point of Contact:**
### Dataset Summary
SVM is a test dataset
### Example usage
SVM has one language. To load a specific language, pass its name as a config name:
```python
from datasets import load_dataset
dataset = load_dataset("KauPage/SVM", "mr-IN")
```
### Supported Tasks and Leaderboards
* automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### Languages
SVM contains labelled (transcribed) data for 1 language:
| Language | Code | Transcribed Hours | Transcribed Speakers | Transcribed Tokens |
|:---:|:---:|:---:|:---:|:---:|
| Marathi | mr-IN | 1 | 1 | 4.8M |
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'mrt_gurudev_10Dec22_0001',
'language': 11, # "hr"
'audio': {
'path': '/home/marathi/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/train_part_0/mrt_gurudev_10Dec22_0001.wav',
'array': array([-0.01434326, -0.01055908, 0.00106812, ..., 0.00646973], dtype=float32),
'sampling_rate': 16000
},
'raw_text': '',
'normalized_text': 'poast genitalnog sakaenja ena u europi tek je jedna od manifestacija takve tetne politike.'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `language` (datasets.ClassLabel) - numerical id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `raw_text` (string) - original (orthographic) audio segment text
* `normalized_text` (string) - normalized audio segment transcription
### Data Splits
All configs contain data in three splits: train, validation and test.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The raw data is collected from Gurudev's sermon recordings (2022 and 2023 Zoom Pravachan).
#### Initial Data Collection and Normalization
### Dataset Curators
[More Information Needed] |
egecandrsn/weatherdata | 2023-04-30T06:14:55.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"region:us"
] | egecandrsn | null | null | null | 0 | 3 | ---
license: unknown
language:
- en
size_categories:
- 1K<n<10K
---
# Weather Dataset README
## Overview
This dataset contains weather data for Ankara, Turkey, from 2016-04-01 to 2022-04-01. The dataset is composed of weather-related measurements and information, such as temperature, precipitation, wind speed, and other relevant parameters.
## Dataset Description
Each row in the dataset represents a single day's weather data. The columns in the dataset are as follows (a short loading sketch follows this list):
- **name** (string): Name of the location (Ankara)
- **datetime** (string): Date in the format "YYYY-MM-DD"
- **tempmax** (float64): Maximum temperature in Celsius
- **tempmin** (float64): Minimum temperature in Celsius
- **temp** (float64): Average temperature in Celsius
- **feelslikemax** (float64): Maximum "feels like" temperature in Celsius
- **feelslikemin** (float64): Minimum "feels like" temperature in Celsius
- **feelslike** (float64): Average "feels like" temperature in Celsius
- **dew** (float64): Dew point temperature in Celsius
- **humidity** (float64): Humidity percentage
- **precip** (float64): Precipitation amount in millimeters
- **precipprob** (int64): Precipitation probability percentage
- **precipcover** (float64): Precipitation coverage percentage
- **preciptype** (null): Precipitation type (expected to be null throughout the dataset; any non-null value is an error)
- **snow** (float64): Snowfall amount in centimeters
- **snowdepth** (float64): Snow depth in centimeters
- **windgust** (float64): Maximum wind gust speed in kilometers per hour
- **windspeed** (float64): Average wind speed in kilometers per hour
- **winddir** (float64): Wind direction in degrees (0-360)
- **sealevelpressure** (float64): Sea-level pressure in millibars
- **cloudcover** (float64): Cloud coverage percentage
- **visibility** (float64): Visibility distance in kilometers
- **solarradiation** (float64): Solar radiation in Watts per square meter
- **solarenergy** (float64): Solar energy in kilojoules per square meter
- **uvindex** (int64): UV index value
- **severerisk** (float64): Severe weather risk percentage
- **sunrise** (string): Sunrise time in the format "YYYY-MM-DDTHH:mm:ss"
- **sunset** (string): Sunset time in the format "YYYY-MM-DDTHH:mm:ss"
- **moonphase** (float64): Moon phase value (0 to 1)
- **conditions** (string): General weather conditions
- **description** (string): Detailed weather description
- **icon** (string): Weather icon identifier
- **stations** (string): Comma-separated list of weather station IDs
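A minimal pandas sketch, assuming the data is distributed as a single CSV file (the file name `ankara_weather.csv` is hypothetical; adjust it to the actual file in this repository):
```python
import pandas as pd

df = pd.read_csv("ankara_weather.csv", parse_dates=["datetime", "sunrise", "sunset"])

# Quick sanity checks against the column descriptions above
print(df[["tempmin", "temp", "tempmax"]].describe())
print(df["preciptype"].notna().sum(), "rows with non-null preciptype (see the Notes section)")
```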
## Notes
Please note that there are some errors in the dataset, such as non-null values in the "preciptype" column. Be sure to handle these cases appropriately when processing the data. |
julia-lukasiewicz-pater/small-GPT-wiki-intro-features | 2023-06-11T14:42:23.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"region:us"
] | julia-lukasiewicz-pater | null | null | null | 0 | 3 | ---
license: cc
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---
# Small-GPT-wiki-intro-features dataset
This dataset is based on [aadityaubhat/GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro).
It contains 100k randomly selected texts (50k from Wikipedia and 50k generated by ChatGPT).
For each text, various complexity measures were calculated, such as readability and lexical richness.
It can be used for text classification or for analysis of the linguistic features of human-generated and ChatGPT-generated texts (a minimal classification sketch follows the feature table below).
## Dataset structure
Features were calculated using various Python libraries, i.e. NLTK, [readability-metrics](https://pypi.org/project/py-readability-metrics/), [lexical-diversity](https://pypi.org/project/lexical-diversity/),
and [TextDescriptives](https://hlasse.github.io/TextDescriptives/). The list of all features and their corresponding sources can be found below:
| Column | Description |
| ------ | ----------- |
| text | human- or ChatGPT-generated text; taken from aadityaubhat/GPT-wiki-intro |
| normalized_bigram_entropy | bigram entropy normalized with estimated maximum entropy; nltk |
| mean_word_length | mean word length; nltk |
| mean_sent_length | mean sentence length; nltk |
| fog | Gunning-Fog; readability-metrics |
| ari | Automated Readability Index; readability-metrics |
| dale_chall | Dale Chall Readability; readability-metrics |
| hdd | Hypergeometric Distribution; lexical-diversity |
| mtld | Measure of lexical textual diversity; lexical-diversity |
| mattr | Moving average type-token ratio; lexical-diversity |
| number_of_ADJ | proportion of adjectives per word; nltk |
| number_of_ADP | proportion of adpositions per word; nltk |
| number_of_ADV | proportion of adverbs per word; nltk |
| number_of_CONJ | proportion of conjunctions per word; nltk |
| number_of_DET | proportion of determiners per word; nltk |
| number_of_NOUN | proportion of nouns per word; nltk |
| number_of_NUM | proportion of numerals per word; nltk |
| number_of_PRT | proportion of particles per word; nltk |
| number_of_PRON | proportion of pronouns per word; nltk |
| number_of_VERB | proportion of verbs per word; nltk |
| number_of_DOT | proportion of punctuation marks per word; nltk |
| number_of_X | proportion of POS tag 'Other' per word; nltk |
| class | binary class, 0 stands for Wikipedia, 1 stands for ChatGPT |
| spacy_perplexity | text perplexity; TextDescriptives |
| entropy | text entropy; TextDescriptives |
| automated_readability_index | Automated Readability Index; TextDescriptives |
| per_word_spacy_perplexity | text perplexity per word; TextDescriptives |
| dependency_distance_mean | mean distance from each token to their dependent; TextDescriptives |
| dependency_distance_std | standard deviation of distance from each token to their dependent; TextDescriptives |
| first_order_coherence | cosine similarity between consecutive sentences; TextDescriptives |
| second_order_coherence | cosine similarity between sentences that are two sentences apart; TextDescriptives |
| smog |SMOG; TextDescriptives |
| prop_adjacent_dependency_relation_mean | mean proportion adjacent dependency relations; TextDescriptives |
| prop_adjacent_dependency_relation_std | standard deviation of proportion adjacent dependency relations; TextDescriptives |
| syllables_per_token_mean | mean of syllables per token; TextDescriptives |
| syllables_per_token_median | median of syllables per token; TextDescriptives |
| token_length_std | standard deviation of token length; TextDescriptives |
| token_length_median | median of token length; TextDescriptives |
| sentence_length_median | median of sentence length; TextDescriptives |
| syllables_per_token_std | standard deviation of syllables per token; TextDescriptives |
| proportion_unique_tokens | proportion of unique tokens; TextDescriptives |
| top_ngram_chr_fraction_3 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| top_ngram_chr_fraction_2 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| top_ngram_chr_fraction_4 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| proportion_bullet_points | proportion of lines in the document that are bullet points; TextDescriptives |
| flesch_reading_ease | Flesch Reading ease ; TextDescriptives |
| flesch_kincaid_grade | Flesch Kincaid grade; TextDescriptives |
| gunning_fog | Gunning-Fog; TextDescriptives |
| coleman_liau_index | Coleman-Liau Index; TextDescriptives |
| oov_ratio| out-of-vocabulary ratio; TextDescriptives |
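A minimal classification sketch over the numeric features and the binary `class` column (the split name, the feature selection, and the logistic-regression baseline are illustrative choices, not part of the dataset):
```python
from datasets import load_dataset
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

ds = load_dataset("julia-lukasiewicz-pater/small-GPT-wiki-intro-features", split="train")
df = ds.to_pandas()

X = df.drop(columns=["text", "class"]).fillna(0.0)  # keep only the numeric feature columns
y = df["class"]                                      # 0 = Wikipedia, 1 = ChatGPT

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```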
## Code
Code that was used to generate this dataset can be found on [Github](https://github.com/julia-lukasiewicz-pater/gpt-wiki-features/tree/main).
|
elonmuskceo/parquet-fruits | 2023-05-01T12:49:44.000Z | [
"license:apache-2.0",
"region:us"
] | elonmuskceo | null | null | null | 1 | 3 | ---
license: apache-2.0
---
Generated from https://github.com/ironSource/parquetjs |
dariolopez/ms-marco-es-500k | 2023-05-01T16:12:05.000Z | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:es",
"license:apache-2.0",
"region:us"
] | dariolopez | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: query
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
splits:
- name: train
num_bytes: 433633520
num_examples: 500000
download_size: 170119229
dataset_size: 433633520
license: apache-2.0
task_categories:
- question-answering
language:
- es
size_categories:
- 100K<n<1M
---
# Dataset Card for "ms-marco-es-500k"
Asymmetric QA dataset in Spanish, filtered from the [multilingual version of MS MARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) and sampled down to 500k rows.
```python
import os

import datasets
ms_marco_es = datasets.load_dataset('unicamp-dl/mmarco', name='spanish', split='train')
ms_marco_es.select(range(500_000)).push_to_hub("dariolopez/ms-marco-es-500k", token=os.environ['hg_token'])
``` |
OdiaGenAI/Odia_Alpaca_instructions_52k | 2023-05-05T20:46:42.000Z | [
"size_categories:10K<n<100K",
"language:or",
"license:cc-by-nc-sa-4.0",
"region:us"
] | OdiaGenAI | null | null | null | 0 | 3 | ---
license: cc-by-nc-sa-4.0
language:
- or
pretty_name: Odia_Alpaca_Instruction_52K
size_categories:
- 10K<n<100K
---
# Dataset Card for Odia_Alpaca_Instruction_52K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is the Odia-translated version of the Alpaca 52K instruction set. Both English and Odia instruction, input, and output strings are available.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
instruction (string)
english_instruction (string)
input (string)
english_input (string)
output (string)
english_output (string)
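A minimal loading sketch (the split name is an assumption, since the card does not list it; field names are taken from the list above):
```python
from datasets import load_dataset

ds = load_dataset("OdiaGenAI/Odia_Alpaca_instructions_52k", split="train")

example = ds[0]
print(example["instruction"])          # Odia instruction
print(example["english_instruction"])  # English source instruction
print(example["output"])               # Odia output
```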
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
benlipkin/folio | 2023-05-02T16:44:40.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc",
"arxiv:2209.00840",
"region:us"
] | benlipkin | null | null | null | 0 | 3 | ---
license: cc
task_categories:
- text-classification
language:
- en
---
```
@article{han2022folio,
title={FOLIO: Natural Language Reasoning with First-Order Logic},
author = {Han, Simeng and Schoelkopf, Hailey and Zhao, Yilun and Qi, Zhenting and Riddell, Martin and Benson, Luke and Sun, Lucy and Zubova, Ekaterina and Qiao, Yujie and Burtell, Matthew and Peng, David and Fan, Jonathan and Liu, Yixin and Wong, Brian and Sailor, Malcolm and Ni, Ansong and Nan, Linyong and Kasai, Jungo and Yu, Tao and Zhang, Rui and Joty, Shafiq and Fabbri, Alexander R. and Kryscinski, Wojciech and Lin, Xi Victoria and Xiong, Caiming and Radev, Dragomir},
journal={arXiv preprint arXiv:2209.00840},
url = {https://arxiv.org/abs/2209.00840},
year={2022}
}
``` |
ImageIN/ImageIn_annotations_resized_images | 2023-05-03T09:33:22.000Z | [
"task_categories:image-classification",
"region:us"
] | ImageIN | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: choice
dtype: string
- name: loaded_image
dtype: image
splits:
- name: train
num_bytes: 172997430.0
num_examples: 1896
download_size: 172992244
dataset_size: 172997430.0
task_categories:
- image-classification
---
# Dataset Card for ImageIn_annotations_resized_images
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk/gpteacher-instruct-chatml | 2023-07-24T20:20:41.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 2 | 3 | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: do_train
dtype: bool
- name: role
dtype: string
splits:
- name: train
num_bytes: 11767161
num_examples: 18194
download_size: 0
dataset_size: 11767161
---
# Dataset Card for "gpteacher-instruct-chatml"
Data preprocessing pipeline: https://github.com/AlekseyKorshuk/chat-data-pipeline |
akumoth/peewee-issues | 2023-05-03T15:53:06.000Z | [
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_ids:topic-classification",
"task_ids:multi-label-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language... | akumoth | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: 'null'
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
dtype: 'null'
- name: assignees
sequence: 'null'
- name: milestone
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: string
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
splits:
- name: train
num_bytes: 9990717
num_examples: 2814
download_size: 3607838
dataset_size: 9990717
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: Peewee Github Issues
size_categories:
- n<1K
source_datasets:
- original
tags:
- peewee
- python
- github
- issues
task_categories:
- text-classification
- feature-extraction
task_ids:
- topic-classification
- multi-label-classification
---
# Dataset Card for Peewee Issues
## Dataset Summary
Peewee Issues is a dataset containing all the issues in the [Peewee github repository](https://github.com/coleifer/peewee) up to the last date of extraction (5/3/2023). It was made primarily for educational purposes (specifically, to get me used to using Hugging Face's datasets), but it can be used for multi-label classification or semantic search. The contents are all in English and concern SQL databases and ORM libraries. |
LEAP/ClimSim_low-res | 2023-09-29T20:31:55.000Z | [
"license:cc-by-4.0",
"arxiv:2306.08754",
"doi:10.57967/hf/0740",
"region:us"
] | LEAP | null | null | null | 1 | 3 | ---
license: cc-by-4.0
---
Corresponding GitHub repo can be found here:
https://github.com/leap-stc/ClimSim
Read more: https://arxiv.org/abs/2306.08754. |
feradauto/NLP4SGPapers | 2023-05-03T17:37:12.000Z | [
"task_categories:text-classification",
"license:cc-by-nc-sa-4.0",
"region:us"
] | feradauto | NLP4SGPAPERS dataset: a scientific dataset with three associated tasks that can help identify NLP4SG papers | null | 1 | 3 | ---
license: cc-by-nc-sa-4.0
pretty_name: NLP4SGPapers
task_categories:
- text-classification
---
# Dataset Card for NLP4SGPapers
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [NLP4SG](https://github.com/feradauto/nlp4sg)
- **Paper:**
- **Point of Contact:** [Zhijing Jin](mailto:zjin@tue.mpg.de), [Fernando Gonzalez](mailto:fgonzalez@ethz.ch)
### Dataset Summary
Scientific dataset with three associated tasks that can help identify NLP4SG papers.
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
Each instance is an annotated paper with title, abstract, year.
### Data Fields
- `ID`: Paper ID in ACL Anthology
- `url`: URL where the paper is available
- `title`: Title of the paper
- `abstract`: Abstract
- `label_nlp4sg`: Whether it is an NLP4SG paper or not. For more info on the criteria, check our paper
- `task`: List of tasks (Only available for the test set and for SG papers)
- `method`: List of methods (Only available for the test set and for SG papers)
- `goal1`: goal in string format
- `goal2`: goal in string format
- `goal3`: goal in string format
- `acknowledgments`: acknowledgments
- `year`: Year of publication
- `sdg1` to `sdg17`: Boolean value that indicates if the paper addresses the United Nations Social Development Goal.
### Data Splits
NLP4SGPapers contains train, test and validation splits.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Information about the data collection can be found in the appendix of [our paper].
### Personal and Sensitive Information
The NLP4SGPapers dataset does not have privacy concerns.
## Considerations for Using the Data
### Social Impact of Dataset
The intended use of this work is to help the creation of an overview of the NLP4SG research landscape.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The NLP4SGPapers dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
``` | |
glombardo/misogynistic-statements-and-their-potential-restructuring | 2023-05-28T17:56:43.000Z | [
"task_categories:text2text-generation",
"task_categories:text-classification",
"size_categories:n<1K",
"language:es",
"license:cc-by-nc-4.0",
"region:us"
] | glombardo | null | null | null | 0 | 3 | ---
license: cc-by-nc-4.0
task_categories:
- text2text-generation
- text-classification
language:
- es
pretty_name: Misogynistic statements and their potential restructuring
size_categories:
- n<1K
dataset_info:
features:
- name: misogynistic
dtype: string
- name: reformulation
dtype: string
splits:
- name: train
num_bytes: 24000
num_examples: 121
- name: validation
num_bytes: 8253
num_examples: 41
- name: test
num_bytes: 8346
num_examples: 41
download_size: 28877
dataset_size: 40599
---
## Misogynistic statements and their potential restructuring
Beta dataset
Generated by GPT3.5
Language: Spanish |
wanicca/WikiHowQA-mnbvc | 2023-09-04T06:18:28.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:mit",
"region:us"
] | wanicca | null | null | null | 5 | 3 | ---
license: mit
task_categories:
- question-answering
language:
- en
- zh
size_categories:
- 10K<n<100K
---
Chinese/English question-answering data extracted from WikiHow pages
Related project: [MNBVC](https://github.com/esbatmop/MNBVC)
Extraction tool code: [WikiHowQAExtractor](https://github.com/wanicca/WikiHowQAExtractor) |
NicholasSynovic/Modified-VEAA | 2023-05-03T18:04:48.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:agpl-3.0",
"region:us"
] | NicholasSynovic | null | null | null | 0 | 3 | ---
license: agpl-3.0
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---
# Modified Victorian Era Authorship Attribution Dataset
## About
This data set is a modified version of the one that can be found [here](https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution).
The difference is that the training dataset was split into two parts: 80% training and 20% testing, with labels.
Splitting was done with a random stratified sampling approach.
This differs from the source dataset, which did not provide any labels for the testing data.
Additionally, all text has been converted to UTF-8 format and any errors were ignored.
The original testing data is not included with this release.
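For reference, the split described above could be reproduced along the following lines. This is only a minimal sketch: the file name and the `author` column name are assumptions for illustration, not taken from the release.
```python
# Minimal sketch of the 80/20 stratified split described above.
# The file name and the "author" column name are assumed for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split

# Read the original training data as UTF-8, ignoring undecodable bytes.
df = pd.read_csv("victorian_training_data.csv", encoding="utf-8", encoding_errors="ignore")

train_df, test_df = train_test_split(
    df,
    test_size=0.20,          # 80% training / 20% testing
    stratify=df["author"],   # preserve the per-author distribution in both splits
    random_state=42,
)
```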
## Citation
> GUNGOR, ABDULMECIT, Benchmarking Authorship Attribution Techniques Using Over A Thousand Books by Fifty Victorian Era Novelists, Purdue Master of Thesis, 2018-04 |
TempoFunk/map | 2023-05-11T17:30:01.000Z | [
"task_categories:text-to-image",
"task_categories:text-to-video",
"task_categories:video-classification",
"task_categories:image-classification",
"size_categories:1M<n<10M",
"language:en",
"license:agpl-3.0",
"region:us"
] | TempoFunk | null | null | null | 1 | 3 | ---
license: agpl-3.0
language:
- en
task_categories:
- text-to-image
- text-to-video
- video-classification
- image-classification
size_categories:
- 1M<n<10M
---
# MAP
An SQLite database of video urls and captions/descriptions. |
tafseer-nayeem/review_helpfulness_prediction | 2023-08-28T21:56:01.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"Human-Centered NLP",
"Helpfulness Prediction",
"Review Helpfulness Prediction",
"User Review Analysis",
"Dataset",
"Review Helpfulness Prediction Dataset",
"doi:10.57967/hf/0613",
"re... | tafseer-nayeem | null | null | null | 0 | 3 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
tags:
- Human-Centered NLP
- Helpfulness Prediction
- Review Helpfulness Prediction
- User Review Analysis
- Dataset
- Review Helpfulness Prediction Dataset
pretty_name: Review Helpfulness Prediction (RHP) Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for Review Helpfulness Prediction (RHP) Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction](https://aclanthology.org/2023.findings-eacl.125/)
- **Leaderboard:**
### Dataset Summary
The success of e-commerce services is largely dependent on helpful reviews that aid customers in making informed purchasing decisions. However, some reviews may be spammy or biased, making it challenging to identify which ones are helpful. Current methods for identifying helpful reviews only focus on the review text, ignoring the importance of who posted the review and when it was posted. Additionally, helpfulness votes may be scarce for less popular products or recently submitted reviews. To address these challenges, we introduce a dataset and task for review helpfulness prediction, incorporating the reviewers' attributes and review date, and build the dataset by scraping reviews from [TripAdvisor](https://www.tripadvisor.com/).
### Languages
English
## Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("tafseer-nayeem/review_helpfulness_prediction")
# Divide the dataset into train, test, and validation sets
train_dataset = dataset["train"]
test_dataset = dataset["test"]
validation_dataset = dataset["validation"]
print(f'Number of training samples: {len(train_dataset)}')
print(f'Number of testing samples: {len(test_dataset)}')
print(f'Number of validation samples: {len(validation_dataset)}')
```
**If the above code doesn't work due to changes in the Hugging Face datasets library**, download the `train.json`, `test.json`, and `validation.json` from the data directory and use the following alternative code:
```python
import json
def load_json(filename):
with open(filename, 'r') as f:
data = json.load(f)
return data
# Load the data
train_data = load_json('train.json')
test_data = load_json('test.json')
validation_data = load_json('validation.json')
```
## Dataset Structure
### Data Instances
One example from the `test` split of the dataset is given below in JSON format.
```
{
"user_review_posted": 28,
"user_total_helpful_votes": 78,
"expertise": 0.013414038240254,
"user_cities_visited": 89,
"review_days": 0.39430449069003204,
"helpful_class": 4,
"review_text": "Had to see for myself. Over priced, bloviated, cheap. I am highly sensitive to mold, and it permeated the hotel. Sheets were damp, pipes blew hot air even when turned off. Considering all the hype, that's what this place is, all hype for too much money."
}
```
### Data Fields
- `user_review_posted`: An integer representing the number of reviews posted by the reviewer.
- `user_total_helpful_votes`: An integer representing the cumulative helpful votes received by the reviewer.
- `expertise`: A normalized floating point number representing the mean number of helpful votes received per review.
- `user_cities_visited`: An integer representing the number of cities visited by the reviewer.
- `review_days`: A normalized floating point number representing the relative age of a review in days.
- `helpful_class`: An integer representing the degree of helpfulness of a review.
- `review_text`: A string representing the review text.
### Data Splits
The following Table presents the summary of our dataset with train, validation, and test splits.
| | Train | Valid | Test |
|:---------------:|---------|--------|-------|
| Total #Samples | 145,381 | 8,080 | 8,080 |
| Avg. #Sentences | 7.82 | 7.8 | 7.81 |
| Avg. #Words | 152.37 | 152.25 | 148.9 |
## Dataset Creation
We build our dataset by scraping reviews from [TripAdvisor](https://www.tripadvisor.com). Out of 225,664 reviews retrieved, close to one third have no helpful votes. We filter such reviews, and this reduces the number of reviews to 161,541. We leverage a logarithmic scale to categorize the reviews based on the number of votes received. Specifically, we map the number of votes into five intervals (i.e., [1,2), [2, 4), [4, 8), [8, 16), [16, infinity)), each corresponding to a helpfulness score of {1, 2, 3, 4, 5}, where the higher the score, the more helpful the review. More details can be found in our [EACL 2023](https://aclanthology.org/2023.findings-eacl.125/) paper.
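The interval mapping described above can be written compactly as a base-2 logarithm. The sketch below is only illustrative: it assumes zero-vote reviews have already been filtered out, and the stored `helpful_class` labels should be checked against the data, since they may be offset by one.
```python
import math

def helpful_class(votes: int) -> int:
    """Map a helpful-vote count to the five intervals described above:
    [1,2) -> 1, [2,4) -> 2, [4,8) -> 3, [8,16) -> 4, [16,inf) -> 5."""
    assert votes >= 1, "reviews with no helpful votes are filtered out"
    return min(int(math.log2(votes)) + 1, 5)

# Sanity check against the interval boundaries.
assert [helpful_class(v) for v in (1, 2, 4, 8, 16, 100)] == [1, 2, 3, 4, 5, 5]
```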
### Discussion of Ethics
In our data scraping process, we took ethical considerations into account. We collected the data at a measured pace, avoiding anything resembling a DDoS-level load on the site.
### Known Limitations
A limitation of our dataset is that we only worked with reviews written in English. As a result, we filtered out reviews written in other languages, and we noticed code-switched reviews in which reviewers alternate between two or more languages within a single review.
## Additional Information
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the resources or it's relevant to your work, please cite [the paper](https://aclanthology.org/2023.findings-eacl.125/).
```
@inproceedings{nayeem-rafiei-2023-role,
title = "On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction",
author = "Nayeem, Mir Tafseer and
Rafiei, Davood",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.125",
pages = "1684--1692",
abstract = "Helpful reviews have been essential for the success of e-commerce services, as they help customers make quick purchase decisions and benefit the merchants in their sales. While many reviews are informative, others provide little value and may contain spam, excessive appraisal, or unexpected biases. With the large volume of reviews and their uneven quality, the problem of detecting helpful reviews has drawn much attention lately. Existing methods for identifying helpful reviews primarily focus on review text and ignore the two key factors of (1) who post the reviews and (2) when the reviews are posted. Moreover, the helpfulness votes suffer from scarcity for less popular products and recently submitted (a.k.a., cold-start) reviews. To address these challenges, we introduce a dataset and develop a model that integrates the reviewer{'}s expertise, derived from the past review history of the reviewers, and the temporal dynamics of the reviews to automatically assess review helpfulness. We conduct experiments on our dataset to demonstrate the effectiveness of incorporating these factors and report improved results compared to several well-established baselines.",
}
``` |
seanghay/khPOS | 2023-05-08T07:58:27.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:km",
"license:cc-by-nc-sa-4.0",
"region:us"
] | seanghay | The khPOS Corpus (Khmer POS Corpus) is a 12,000 sentences (25,626 words) manually word segmented and POS tagged corpus developed for Khmer language NLP research and developments. We collected Khmer sentences from websites that include various area such as economics, news, politics. Moreover it is also contained some student list and voter list of national election committee of Cambodia. The average number of words per sentence in the whole corpus is 10.75. Here, some symbols such as "។" (Khmer sign Khan), "៖" (Khmer sign Camnuc pii kuuh), "-", "?", "[", "]" etc. also counted as words. The shotest sentence contained only 1 word and longest sentence contained 169 words as follows (here, line number : Khmer sentence):
1814 : " ម៉ែ ឥត មាន ស្អប់_ខ្ពើម ឪពុក កូន ឯង ទេ ម៉ែ តែង នឹក មក កូន នឹង ឪពុក ឯង ពុំ មាន ភ្លេច ព្រម_ទាំង អ្នក~ភូមិ ផង របង ជាមួយ ឯង ទៀត ដែល ម្ដាយ ធ្លាប់ នៅ ជាមួយ គេ ប៉ុន្តែ ម៉ែ ជាតិ ជា ទេព_ធីតា ពុំ អាច នៅ ជាមួយ មនុស្ស_លោក បាន យូរ ទេ រាល់ ថ្ងៃ ម៉ែ តែង ទៅ បំពេញ កិច្ច នៅ ចំពោះ មុខ ព្រះ~ភក្ត្រ ព្រះ~ឥន្ទ្រាធិរាជ គឺ សុំ អង្វរ ឲ្យ ព្រះ~អង្គ ប្រទាន ពរ ដល់ កូន ឯង និង ឪពុក កូន ឯង កុំ បី ខាន មិន តែ ប៉ុណ្ណោះ ម្ដាយ បាន ទាំង ទូល សុំ ព្រះ~ឥន្ទ្រ ឲ្យ ព្រះ~អង្គ មេត្តា ផ្សាយ នូវ សុភ_មង្គល ដល់ មនុស្ស នៅ ឋាន នេះ ទូទៅ ផង កូន_ប្រុស ពន្លក ម្ដាយ ! ម្ដាយ ពុំ អាច នៅ ជាមួយ_នឹង កូន បាន ទៀត តែ ម្ដាយ យក កូន ឯង ទៅ លេង ប្រាសាទ ម្ដាយ ឯ ឋាន លើ មួយ ដង ម្ដាយ នឹង នាំ កូន ឯង ទៅ មុជ_ទឹក ក្នុង អាង ក្រអូប នៅ_ក្នុង សួន ព្រះ~ឥន្ទ្រ ហើយ ទឹក នោះ នឹង ជម្រះ កាយ កូន ឯង ឲ្យ បាត់ ធំ ក្លិន មនុស្ស_លោក បន្ទាប់_ពី នោះ មក ម្ដាយ នឹង នាំ កូន ឯង ចូល ទៅ_ក្នុង ប្រាសាទ រួច នាំ កូន ឯង ទៅ ថ្វាយ_បង្រះ~ឥន្ទ្រ " ។ | Ye Kyaw Thu, Vichet Chea, Yoshinori Sagisaka, "Comparison of Six POS Tagging Methods on 12K Sentences Khmer Language POS Tagged Corpus", In the first Regional Conference on Optical character recognition and Natural language processing technologies for ASEAN languages (ONA 2017), December 7-8, 2017, Phnom Penh, Cambodia. | null | 0 | 3 | ---
license: cc-by-nc-sa-4.0
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': AB
'1': AUX
'2': CC
'3': CD
'4': DBL
'5': DT
'6': ETC
'7': IN
'8': JJ
'9': KAN
'10': M
'11': NN
'12': PA
'13': PN
'14': PRO
'15': QT
'16': RB
'17': RPN
'18': SYM
'19': UH
'20': VB
'21': VB_JJ
'22': VCOM
splits:
- name: train
num_bytes: 3569524
num_examples: 12000
download_size: 2372205
dataset_size: 3569524
task_categories:
- text-classification
- text-generation
language:
- km
pretty_name: Khmer Part-of-Speech Corpus for Khmer NLP Research and Developments
size_categories:
- 10K<n<100K
---
> I am not the author of this dataset. [View on GitHub](https://github.com/ye-kyaw-thu/khPOS).
# khPOS (draft released 1.0)
khPOS (Khmer Part-of-Speech) Corpus for Khmer NLP Research and Developments
## License
Creative Commons Attribution-NonCommercial-Share Alike 4.0 International (CC BY-NC-SA 4.0) License
[Details Info of License](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Introduction
The khPOS Corpus (Khmer POS Corpus) is a corpus of 12,000 sentences (25,626 words), manually word-segmented and POS-tagged, developed for Khmer language NLP research and development. We collected Khmer sentences from websites covering various areas such as economics, news, and politics. Moreover, it also contains some student lists and voter lists of the National Election Committee of Cambodia. The average number of words per sentence in the whole corpus is 10.75. Here, some symbols such as "។" (Khmer sign Khan), "៖" (Khmer sign Camnuc pii kuuh), "-", "?", "\[", "\]" etc. are also counted as words. The shortest sentence contained only 1 word and the longest sentence contained 169 words, as follows (here, line number : Khmer sentence):
1814 : " ម៉ែ ឥត មាន ស្អប់_ខ្ពើម ឪពុក កូន ឯង ទេ ម៉ែ តែង នឹក មក កូន នឹង ឪពុក ឯង ពុំ មាន ភ្លេច ព្រម_ទាំង អ្នក\~ភូមិ ផង របង ជាមួយ ឯង ទៀត ដែល ម្ដាយ ធ្លាប់ នៅ ជាមួយ គេ ប៉ុន្តែ ម៉ែ ជាតិ ជា ទេព_ធីតា ពុំ អាច នៅ ជាមួយ មនុស្ស_លោក បាន យូរ ទេ រាល់ ថ្ងៃ ម៉ែ តែង ទៅ បំពេញ កិច្ច នៅ ចំពោះ មុខ ព្រះ\~ភក្ត្រ ព្រះ\~ឥន្ទ្រាធិរាជ គឺ សុំ អង្វរ ឲ្យ ព្រះ\~អង្គ ប្រទាន ពរ ដល់ កូន ឯង និង ឪពុក កូន ឯង កុំ បី ខាន មិន តែ ប៉ុណ្ណោះ ម្ដាយ បាន ទាំង ទូល សុំ ព្រះ\~ឥន្ទ្រ ឲ្យ ព្រះ\~អង្គ មេត្តា ផ្សាយ នូវ សុភ_មង្គល ដល់ មនុស្ស នៅ ឋាន នេះ ទូទៅ ផង កូន_ប្រុស ពន្លក ម្ដាយ ! ម្ដាយ ពុំ អាច នៅ ជាមួយ_នឹង កូន បាន ទៀត តែ ម្ដាយ យក កូន ឯង ទៅ លេង ប្រាសាទ ម្ដាយ ឯ ឋាន លើ មួយ ដង ម្ដាយ នឹង នាំ កូន ឯង ទៅ មុជ_ទឹក ក្នុង អាង ក្រអូប នៅ_ក្នុង សួន ព្រះ\~ឥន្ទ្រ ហើយ ទឹក នោះ នឹង ជម្រះ កាយ កូន ឯង ឲ្យ បាត់ ធំ ក្លិន មនុស្ស_លោក បន្ទាប់_ពី នោះ មក ម្ដាយ នឹង នាំ កូន ឯង ចូល ទៅ_ក្នុង ប្រាសាទ រួច នាំ កូន ឯង ទៅ ថ្វាយ_បង្រះ\~ឥន្ទ្រ " ។
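Since this Hugging Face copy stores the `pos_tags` column as integer class labels (see the feature schema above), the tags can be recovered as strings along these lines. This is only a minimal loading sketch for the copy hosted at `seanghay/khPOS`:
```python
from datasets import load_dataset

# Load this copy of the corpus (only a "train" split is defined in the schema above).
khpos = load_dataset("seanghay/khPOS", split="train")

# The inner feature of `pos_tags` is a ClassLabel (AB, AUX, CC, ...).
tag_feature = khpos.features["pos_tags"].feature

example = khpos[0]
for token, tag_id in zip(example["tokens"], example["pos_tags"]):
    print(token, tag_feature.int2str(tag_id))
```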
## Word Segmentation
In Khmer texts, words composed of single or multiple syllables are usually not separated by white space. Spaces are used for easier reading and are generally put between phrases, but there are no clear rules for using spaces in the Khmer language. Therefore, word segmentation is a necessary prerequisite for POS tagging. Four classes of segment (word) types were observed during the manual segmentation of the corpus of Khmer text, each representing a different type of word. These were:
- Word Type 1: Single Words
- Word Type 2: Compound Words
- Word Type 3: Compound Words with Prefix
- Word Type 4: Compound Words with Suffix
For the detail information of the word segmentation rules and how we built a Khmer word segmentation model, please refer to our published paper (see Publiation Section).
## POS Tags
Part of speech is a category to which a word is assigned in accordance with its syntactic functions. In the Khmer grammatical system, many linguists have defined their own POS tags according to their line of research. Even though many books have been published, there is no standard agreement yet, especially on the number and names of POS tags. Compared to English, some English POS are not used in the Khmer language, such as gerunds, comparative and superlative adjectives, particles, etc. The Khmer POS tag set is defined based on the CHOUN NATH dictionary. Some new POS tags that are not defined in the dictionary are added with the word disambiguation task in mind. Unlike English grammar, some Khmer sentences consist of more than one verb.
The definitions and descriptions of POS tags are presented in detail as follow:
1. Abbreviation (AB): For example, គម or គ.ម for kilometer (km), អសប for United Nation (UN), ពស or ព.ស for ពុទ សក ជ (Buddhism era), នប or ន.ប for នគរ ល (police), អហ or អ.ហ for វុធហត (Police Military) etc.
2. Adjective (JJ): An adjective is a word used to modify or describe a noun. The adjective usually appears to the right of the noun; very few adjectives come before the noun. For example, ក្រហម (red), កន្លះ (half), ប្លែក (strange), តូច (small), ល្អ (good), ស្អាត (beautiful) etc.
3. Adverb (RB): An adverb is a word that is used to modify verb, adjective or another adverb. For example, ណាស់ (very), ពុំ (not), ទើប (just), ពេកក្រៃ (very), ហើយ (already) etc.
4. Auxiliary Verb (AUX): Only three groups of verbs are tagged as auxiliary verb that used to make tense.
- Past form: បាន or មាន + Verb
- Progressive form: កំពុង + Verb
- Future form: នឹង + Verb
5. Cardinal Number (CD): A cardinal number is a word or a number denoting a quantity. For example, បី (three), ១០០ (100), ចតុ (four), ពាន់ (thousand), លាន (million) etc.
6. Conjunction (CC): Conjunction is a word to connect between words, phrases, and sentences. ក៏ប៉ុន្តែ (but), ពីព្រោះ (because), ដ្បិត (for, since), ទម្រាំតែ (until), ពុំនោះសោត (otherwise), បើ (if) etc.
7. Currency (CUR): CUR for currency symbol such as: ៛, \$, ₤, € etc.
8. Determiner Pronoun (DT): In Khmer grammar, determiners are classified under pronoun unlike English. It is used to tell location or/and uncertainty of noun. They are equivalent to English words: this, that, those, these, all, every, each, some etc. For example, នេះ (this), នោះ (that), ទាំងនេះ (these), ទាំងអស់ (all), នានា (various), ខ្លះ (some), សព្វ (every) etc.
9. Double Sign (DBL): Double sign (ៗ) is used to remind reader to read the previous word twice. For example, មនុស្ស/NN (people) គ្រប់/DT (every) ៗ/DBL គ្នា/PRO (person), "everybody" in English.
10. Et Cetera (ETC): ។ល។ is equal to et cetera (etc.) in English.
11. Full Stop (KAN): There are two full stops in Khmer language, ។ for sentence and ៕ for paragraph.
12. Interjection (UH): Word represents sound of animal, machine, and surprised sound. Interjections are always at the beginning of a sentence, and mostly followed by exclamation mark. For example, អូ (Oh!), ម៉េវ (Meow), អ៊ុះ (uh) etc.
13. Measure Word (M): Measure Words are classified to describe different quality corresponding class of noun. Some of these words can not be found in English. For example: ព្រះសង្គ/NN (monk) ២/CD (2) អង្គ/M (person), សំលៀកបំពាក់/NN (cloth) ១/CD (1), សម្រាប់/M (set), ឆ្កែ/NN (dog) ១/CD (1) ក្បាល/M (head) etc.
14. Noun (NN): A noun is a word or compound word that identifies a person, an animal, an object, an idea, a thing, etc. For example: ឡាន (Car), ការអភិវឌ្ឍន៍ (Development), សកម្មភាព (Action), ខ្មៅដៃ (Pencil), ទឹកកក (Ice) etc.
15. Particle (PA): We consider three types of particle and they are hesitation, response and final. For the two medial particle words ក៏ ("so, then, but" in English) and នូវ ("of, with" in English) \[1\], we consider them as RB and IN.
- Hesitation Particle: ខ្ញុំ (I) គិត (think) …អ៊ើ/PA (Er. . .) មិន (not) ឃើញ (see), ("I er… don’t think so" in English)
- Response Particle: អើ/PA (Hm, Ah) ខ្ញុំ (I) ដឹង (know) ហើយ (already), ("Hmm I already know" in English)
- Final Particle: There are some final particles such as ណា៎, សិន and ចុះ. Example usage of ណា៎: កុំ/RB (don't) ភ្លេច/VB (forget) ណា៎/PA, ("Hmm don't forget!" in English), Example usage of សិន: ចាំ/VB (wait) បន្តិច/RB (a while) សិន/PA, Example usage of ចុះ: ទៅ/VB (go) ចុះ/PA
16. Preposition (IN): Preposition is a word or a compound word that is used to connect two different words or phrases. It indicate the place, time, possession, relation etc. For example, ចំពោះ (to), ដល់ (to), ដើម្បី (in order to), ក្នុង (in), លើ (on), រវាង (between, around) etc.
17. Pronoun (PRO): A pronoun is a word that substitutes for a noun or a noun phrase. These words are equivalent to the English words: I, he, she, it, we, they, them, him, her, etc. For example, ខ្ញុំ (I), គាត់ (he or she), យើង (we), ពួកយើង (our group or we), ខ្ញុំបាទ (polite form of I, me), ទូលបង្គំ (I, me for conversation with royal family) etc.
18. Proper Noun (PN): A proper noun is a noun that represents a unique thing, for example, the name of a person, a place, or a date, etc. For example: សុខា (Sokha) ភ្នំពេញ (Phnom Penh), ថ្ងៃអង្គារ (Tuesday), កាល់តិច (Caltex), មេគង្គ (Mekong) etc.
19. Question Word (QT): In Khmer language, តើ is mostly used in the beginning of an interrogative sentence. For example,
តើ/QT អ្នក/PRO (you) ឈ្មោះ/NN (name) អ្វី/PRO (what)?, "What is your name?" in English.
20. Relative Pronoun (RPN): In Khmer language, there is only one relative pronoun. It is ដែល "that, which, where, who" in English.
21. Symbol (SYM): SYM for others sign or symbol such as: +, -, \*, \/, ៖, =, @, \#, \% etc.
22. VB\_JJ: VB\_JJ is a tag for an adjective whose original form is a verb. Currently, there is no proposed POS tag name for this kind of Khmer word. Although we could use the JJ tag, we use the VB\_JJ POS tag to clarify its function and also for semantic purposes. For example:
- The word សម្រាប់ (for) or ដើម្បី (to) is normally removed in both written and spoken Khmer.
កន្លែង/NN (place) សម្រាប់ (for) ធ្វើការ/VB\_JJ (working), office in English
ម៉ាស៊ីន/NN (Machine) សម្រាប់ (for) បោក/VB\_JJ (washing) ខោអាវ/NN (cloth), washing machine in English
ពួកគាត់/PRO (they) អាច/VB (can) មាន/VB (have) ការងារ/NN (work) ធ្វើ/VB\_JJ (to do)
- When the Khmer relative pronoun is removed, the verb form stays the same as it was. It must be tagged VB\_JJ since it is no longer a verb in the subordinate clause.
សិស្ស (student) ដែល (who) មាន/VB (has) ពិន្ទុ (mark) ខ្ពស់ (high) នឹង (will) ទទួលបាន (get) អាហារូបករណ៍ (scholarship), "a student who has high marks will get a scholarship" in English; but when ដែល (who) is removed, មាន/VB (has) should become មាន/VB\_JJ (having)
23. Verb (VB): A verb is a word that expresses an action, event, or condition. The verb is the middle part of a phrase. Normally, a verb needs an object, and sometimes it also needs a complement. For example, ស្តាប់ (listen), មានប្រសាសន៍ (say), ស្រលាញ់ (love), ច្រៀង (sing), បើកបរ (drive) etc.
24. Verb Complement (VCOM): Its original form is a verb, but it is tagged VCOM when two verbs appear in a sentence and the second emphasizes the first. In particular, when a compound verb is split by the word មិន (no or not), the first part is a verb and the second part is VCOM. For example, លក់ (sell) ដាច់/VCOM (a lot), ប្រលង (exam) មិន (no) ជាប់/VCOM (pass), ដេក/VB (sleep), មិន/RB (not) លក់/VCOM (sleep well) etc.
## Files/Scripts
Corpus-draft-ver-1.0/ (**_latest version_**)
**Scripts:**
mk-wordtag.pl : Perl script for printing word only file, tag only file, listing compound-words etc.
mk-pair.pl : Perl script for combining word file and tag file to word/tag format
**Data:**
data/ : Data preparation folder for incremental POS-tagging models
**Models:**
Two-Hours/: Incremental training (2,000 to 12,000 sentences) of 2hours annotation approach models with khPOS corpus.
Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/2hours/note.txt)
3gHMM/ : Incremental training (2,000 to 12,000 sentences) of 3-gram HMM (Hidden Markov Model) models with khPOS corpus.
Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/3gHMM/note.txt)
crf/ : Incremental training (2,000 to 12,000 sentences) of CRF POS-tagging models with khPOS corpus.
Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/crf/note.txt)
kytea/ : Incremental training (2,000 to 12,000 sentences) of L2 regularized SVM models with khPOS corpus.
Running logfile: [note](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/kytea/note.txt)
maxent/ : Incremental training (2,000 to 12,000 sentences) of Maximum Entrophy models with khPOS corpus.
Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/maxent/note.txt)
rdr/ : Incremental training (2,000 to 12,000 sentences) of RDR (Ripple Down Rule-based) models with khPOS corpus.
Running logfile: [note.txt](https://github.com/ye-kyaw-thu/khPOS/blob/master/corpus-draft-ver-1.0/model/rdr/note.txt)
## Development and Support
Contributors
Vichet Chea
[Ye Kyaw Thu](https://sites.google.com/site/yekyawthunlp/)
## Acknowledgements
We would like to express our gratitude to Mr. Sorn Kea and Miss Leng Greyhuy for their help in POS tagging 12,100 sentences of Khmer Corpus manually.
## Publication
*Please cite following paper:*
Ye Kyaw Thu, Vichet Chea, Yoshinori Sagisaka, "Comparison of Six POS Tagging Methods on 12K Sentences Khmer Language POS Tagged Corpus", In the first Regional Conference on Optical character recognition and Natural language processing technologies for ASEAN languages (ONA 2017), December 7-8, 2017, Phnom Penh, Cambodia. [paper](https://github.com/ye-kyaw-thu/khPOS/blob/master/khpos.pdf)
## Reference
Vichet Chea, Ye Kyaw Thu, Chenchen Ding, Masao Utiyama, Andrew Finch and Eiichiro Sumita, "Khmer Word Segmentation Using Conditional Random Fields", In Khmer Natural Language Processing 2015 (KNLP2015), December 4, 2015, Phnom Penh, Cambodia.
[paper](http://khmernlp.org/2015/wp-content/uploads/2016/09/Paper-Khmer-Word-Segmentation-Using-.pdf)
Madeline Elizabeth. Ehrman, Kem Sos, Foreign Service Institute (U.S.), and Defense Language Institute (U.S.). Contemporary Cambodian: grammatical sketch, by Madeline E. Ehrman, with the assistance of Kem Sos. Foreign Service Institute, Dept. of State; \[for sale by the Supt. of Docs., U.S. Govt. Print. O .\] Washington, 1972. |
howey/super_scirep | 2023-05-10T20:33:02.000Z | [
"region:us"
] | howey | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2021}
} | null | 0 | 3 | # SuperSciRep: A Multi-Format Benchmark for Full-text Scientific Document Representations
|
akozlova/RuFacts | 2023-05-05T15:59:44.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:ru",
"license:cc-by-4.0",
"fact-checking",
"region:us"
] | akozlova | Fact-checking benchmark for the Russian Big Language Models. | null | null | 2 | 3 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- ru
tags:
- fact-checking
size_categories:
- 1K<n<10K
---
# Dataset Card for RuFacts
## Dataset Description
RuFacts is a benchmark for internal fact-checking for the Russian language. The dataset contains tagged examples labeled consistent and inconsistent.
For inconsistent examples, ranges containing violations of facts in the source text and the generated text are also collected and presented on the [Kaggle competition page](https://www.kaggle.com/competitions/internal-fact-checking-for-the-russian-language).
Various data sources and approaches for data generation were used to create the training and test datasets for the fact-checking task. We consider the data on the sentence level and small texts. The average length of texts is 198 symbols, the minimum is 10 symbols, and the maximum is 3,402 symbols.
The final dataset was formed using three main approaches:
* Texts generated by a [paraphrase model](https://habr.com/ru/companies/sberdevices/articles/667106/)
* Translations of the [dataset for fact-checking](https://fever.ai/dataset/fever.html)
* Text augmentation
Translations and generated data were manually labeled via the crowd-sources platform Yandex.Toloka. We additionally manually annotate the augmented data for
the test set. The test set consists of examples from all three sources: 26% translations, 6% augmented data, and 68% generated paraphrases.
We require three criteria for the generated text to be factually consistent with the original:
1. facts are correct and not corrupted;
2. any additional facts in the generated texts are not included;
3. all the main facts are included in the generated text.
## Data Structure
### Data Fields
* `idx`: an integer
* `evidence`: a string containing the original text
* `claim`: a string containing the generated text by some genetative models
* `label`: an integer, either 0 or 1, indicating whether the facts are consistent (0) or inconsistent (1)
An example of `train`/`validation` looks as follows:
```
{'idx': 1,
'evidence': 'Суд в Англии рассмотрит дело советского диссидента Буковского',
'claim': 'Суд в Великобритании рассмотрит дело советского диссидента Буковского',
'label': 0}
```
An example of `test` looks as follows:
```
{'idx': 4,
'evidence': 'Google выплатит штраф в 200 млн долларов за сбор данных детей на YouTube.',
'claim': 'Google заплатит $200 млн за нарушения конфиденциальности детей на YouTube.',
'label': -1}
```
### Data Splits
| |train | validation | test|
|-----|------|------------|-----|
|rows |4677 | 1559 | 500 | |
OdiaGenAI/dolly-odia-15k | 2023-06-05T19:21:34.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:or",
"license:cc-by-nc-sa-4.0",
"region:us"
] | OdiaGenAI | null | null | null | 0 | 3 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- or
pretty_name: Dolly-Odia-15K
size_categories:
- 10K<n<100K
---
# Dataset Card for Dolly-Odia-15K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is the Odia-translated version of the Dolly 15K instruction set. In this dataset both English and Odia instruction, input, and output strings are available.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
instruction (string)
english_instruction (string)
input (string)
english_input (string)
output (string)
english_output (string)
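As an illustration of how these fields line up, one record could be turned into a simple Odia instruction-tuning prompt as sketched below. The Alpaca-style template is an assumption for illustration, not part of the release:
```python
# Illustrative only: the prompt template below is an assumption, not part of the release.
def to_odia_prompt(example: dict) -> str:
    """Build a simple instruction-tuning prompt from one record's Odia fields."""
    parts = [f"### Instruction:\n{example['instruction']}"]
    if example.get("input"):  # some records may have an empty input
        parts.append(f"### Input:\n{example['input']}")
    parts.append(f"### Response:\n{example['output']}")
    return "\n\n".join(parts)
```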
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
OdiaGenAI/gpt-teacher-instruct-odia-18k | 2023-05-05T20:50:37.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:or",
"license:cc-by-sa-4.0",
"region:us"
] | OdiaGenAI | null | null | null | 0 | 3 | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- or
pretty_name: GPT-Teacher-Instruct-Odia-18K
size_categories:
- 10K<n<100K
---
# Dataset Card for Odia_GPT-Teacher-Instruct-Odia-18K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is the Odia-translated version of the GPT-Teacher 18K instruction set. In this dataset both English and Odia instruction, input, and output strings are available.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
instruction (string)
english_instruction (string)
input (string)
english_input (string)
output (string)
english_output (string)
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
OdiaGenAI/gpt-teacher-roleplay-odia-3k | 2023-05-05T20:56:24.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:or",
"license:cc-by-nc-sa-4.0",
"region:us"
] | OdiaGenAI | null | null | null | 4 | 3 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- or
pretty_name: GPT-Teacher-RolePlay-Odia-3K
size_categories:
- 1K<n<10K
---
# Dataset Card for GPT-Teacher-RolePlay-Odia-3K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is the Odia-translated version of the GPT-Teacher-RolePlay 3K instruction set. In this dataset both English and Odia instruction, input, and output strings are available.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
instruction (string)
english_instruction (string)
input (string)
english_input (string)
output (string)
english_output (string)
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
zdy023/WikiHow-taskset | 2023-08-22T11:57:37.000Z | [
"license:apache-2.0",
"arxiv:2305.08144",
"region:us"
] | zdy023 | null | null | null | 2 | 3 | ---
license: apache-2.0
---
# WikiHow Task Set
WikiHow task set is an InfoUI interaction task set based on
[Mobile-Env](https://github.com/X-LANCE/Mobile-Env) proposed in [*Mobile-Env:
An Evaluation Platform and Benchmark for Interactive Agents in LLM
Era*](https://arxiv.org/abs/2305.08144).
[WikiHow](https://www.wikihow.com/Main-Page) is a collaborative wiki site about
various real-life tips with more than 340,000 online articles. To construct the
task set, 107,448 pages are crawled, and the dumped website data occupy about
88 GiB in total.
Several task definition templates are designed according to the functions of
WikiHow app and 5,522 task definitions are instantiated through the template
toolkit in Mobile-Env. This task set is named the *extended set*
(`wikihow-extended.tar.xz`). There may be several faults that may make the
system or the task fail in the auto-generated tasks. Therefore, 178 tasks are
sampled from the extended set and have been verified by human beings to ensure
correctness and stability, which is named the *canonical set*
(`wikihow-canonical.tar.xz`). Owing to the limit of the budgets, only 70 tasks
are tested using the proposed LLM-based agent in the corresponding pager.
These 70 tasks are given in `wikihow-microcanon.tar.xz`. We call it the
*canonical subset* or the *micro canonical set*.
### Website Data Replay
The replay script for [mitmproxy](https://mitmproxy.org/) is given as
`replay_url.py`. To use this replay script, the information retrieval tool
[Pyserini](https://github.com/castorini/pyserini/) is required. Four parameters
are expected to be assigned in the script:
+ The crawled data from WikiHow website (`dumps` in `wikihow.data.tar.xz`)
+ The HTML templates used to mock the search result page (`templates` in
`wikihow.data.tar.xz`)
+ The indices for the search engine based on Pyserini (`indices-t/indices` in
`wikihow.data.tar.xz`)
+ The metadata of the crawled articles (`indices-t/docs/doc_meta.csv` in
`wikihow.data.tar.xz`)
All the required data are offered in `wikihow.data.tar.xz`. (The archive is
about 78 GiB. And the decompressed data are about 88 GiB.) The archive is split
into two pieces (`wikihow.data.tar.xz.00` and `wikihow.data.tar.xz.01`). You
can use `cat` to concatenate them:
```sh
cat wikihow.data.tar.xz.00 wikihow.data.tar.xz.01 >wikihow.data.tar.xz
```
The SHA256 checksums are provided in `wikihow.data.tar.xz.sha256` to check the
integrity.
To run the script:
```sh
mitmproxy --showhost -s replay_url.py
```
### Certificate Unpinning Plan
The `syscert` plan proposed by Mobile-Env works for WikiHow app. You can
complete the config according to the [guideline of
Mobile-Env](https://github.com/X-LANCE/Mobile-Env/blob/master/docs/dynamic-app-en.md).
The available APK package from [APKCombo](https://apkcombo.com/) is provided.
Note that the AVD image of Android 11.0 (API Level 30, Google
APIs) should be used to obtain the best compatibility and the root-enabled ADBD.
### Human-Rewritten Instructions
Human-rewritten instructions for the *canonical set* are release under
`instruction_rewriting/`. An AndroidEnv wrapper `InstructionRewritingWrapper`
is provided to load the rewritten instructions (`merged_doccano.json`) and
public patterns (`pattern-*.txt`). The annotations are collected via
[doccano](https://doccano.github.io/doccano/). The patterns are parsed by
[`sentence_pattern.py`](instruction_rewriting/sentence_pattern.py).
|
zetavg/coct-en-zh-tw-translations-twp-300k | 2023-05-07T05:05:22.000Z | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"language:en",
"region:us"
] | zetavg | null | null | null | 9 | 3 | ---
dataset_info:
features:
- name: en
dtype: string
- name: ch
dtype: string
splits:
- name: train
num_bytes: 103139635
num_examples: 310916
download_size: 75689895
dataset_size: 103139635
task_categories:
- translation
- text-generation
language:
- zh
- en
pretty_name: ~300K English ↔ Traditional Chinese Sentences from the COCT Database
size_categories:
- 100K<n<1M
---
# ~300K English ↔ Traditional Chinese Sentences from the COCT Database
The data in this dataset are collected from the Corpus of Contemporary Taiwanese Mandarin (COCT), mostly contributed by the [Taiwan Panorama](https://www.taiwan-panorama.com/) magazine. |
genta-tech/boolq-id | 2023-05-09T19:46:01.000Z | [
"task_categories:text-classification",
"task_categories:feature-extraction",
"size_categories:10K<n<100K",
"language:id",
"license:cc-by-sa-4.0",
"super_glue",
"text similarity",
"region:us"
] | genta-tech | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: passage
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 4300375
num_examples: 9427
download_size: 2503993
dataset_size: 4300375
license: cc-by-sa-4.0
task_categories:
- text-classification
- feature-extraction
language:
- id
tags:
- super_glue
- text similarity
size_categories:
- 10K<n<100K
---
# Dataset Card for "boolq-id"
This dataset is an Indonesian translation of the boolq subset of the [super_glue](https://huggingface.co/datasets/super_glue) benchmark.
# Citing & Authors
```
@inproceedings{clark2019boolq,
title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei, and Kwiatkowski, Tom and Collins, Michael, and Toutanova, Kristina},
booktitle={NAACL},
year={2019}
}
@article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
``` |
genta-tech/squad_pairs_indo | 2023-05-07T08:00:03.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:id",
"license:cc-by-4.0",
"region:us"
] | genta-tech | null | null | null | 0 | 3 | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- id
size_categories:
- 10K<n<100K
---
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
This is an Indonesian translation of the [squad](https://huggingface.co/datasets/squad) dataset.
Translated from [sentence-transformers/embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data)
Translated using [Helsinki-NLP/EN-ID](https://huggingface.co/Helsinki-NLP/opus-mt-en-id) |
MattiaL/tapir-cleaned-67k | 2023-05-09T08:01:49.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"region:us"
] | MattiaL | null | null | null | 1 | 3 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: Tapir-Cleaned
task_categories:
- text-generation
size_categories:
- 10K<n<100K
---
# Dataset Card for Tapir-Cleaned
This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.
## Tapir Dataset Summary
Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the IFTTT platform.
After a thorough cleaning process that involved the removal of redundant and inconsistent recipes, the refined dataset was condensed to include 67,697 high-quality recipes.
This curated set of instruction data is particularly useful for conducting instruction-tuning exercises for language models,
allowing them to more accurately follow instructions and achieve superior performance.
The latest version of Tapir includes a correlation score that helps identify the most appropriate description-rule pairs for instruction tuning.
Description-rule pairs with a score greater than 0.75 are deemed good enough and are prioritized for further analysis and tuning.
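Selecting those higher-quality pairs can be done with a simple filter, as in the sketch below. Note that in the data instance shown later in this card the `score` field appears as a string, so it is cast to float defensively:
```python
from datasets import load_dataset

# Sketch only: repo id taken from this card; `score` is cast in case it is stored as a string.
tapir = load_dataset("MattiaL/tapir-cleaned-67k", split="train")

high_quality = tapir.filter(lambda ex: float(ex["score"]) > 0.75)
print(f"{len(high_quality)} of {len(tapir)} description-rule pairs have score > 0.75")
```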
### Supported Tasks and Leaderboards
The Tapir dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Tapir are mainly in English (BCP-47 en).
# Dataset Structure
### Data Instances
```json
{
"instruction":"From the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.",
"input":"If it's raining outside, you'll want some nice warm colors inside!",
"output":"IF Weather Underground Current condition changes to THEN LIFX Change color of lights",
"score":"0.788197",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nFrom the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.\n\n### Input:\nIf it's raining outside, you'll want some nice warm colors inside!\n\n### Response:\nIF Weather Underground Current condition changes to THEN LIFX Change color of lights",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform.
* `input`: context or input for the task. Each of the 67K inputs is unique.
* `output`: the answer taken from the original Tapir Dataset formatted as an IFTTT recipe.
* `score`: the correlation score obtained via BertForNextSentencePrediction
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models.
### Data Splits
| | train |
|---------------|------:|
| tapir | 67697 |
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{tapir,
author = {Mattia Limone, Gaetano Cimino, Annunziata Elefante},
title = {TAPIR: Trigger Action Platform for Information Retrieval},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/MattiaLimone/ifttt_recommendation_system}},
}
``` |
neurae/dnd_style_intents | 2023-07-16T08:10:05.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"D&D",
"intent",
"classification",
"region:us"
] | neurae | null | null | null | 4 | 3 | ---
dataset_info:
features:
- name: examples
dtype: string
- name: label_names
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 9654988
num_examples: 130570
- name: test
num_bytes: 1208016
num_examples: 16330
- name: eval
num_bytes: 1203046
num_examples: 16321
download_size: 5759885
dataset_size: 12066050
task_categories:
- text-classification
language:
- en
size_categories:
- 100K<n<1M
tags:
- D&D
- intent
- classification
pretty_name: D&D Style Intents
license: apache-2.0
---
# Dataset Card for "dnd_style_intents"
This dataset was designed for the intent classification module of a dialogue system for game developers.
There are about 163K examples over 17 intents in the dataset.
All intents belong to one of two groups: intents for interacting with game mechanics and intents for more accurate dialogue understanding.
The data was generated artificially and augmented with masking and a paraphrase model. All examples are in D&D style. |
genta-tech/qnli-id | 2023-05-09T19:40:54.000Z | [
"task_categories:feature-extraction",
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:id",
"license:cc-by-sa-4.0",
"glue",
"Text Similarity",
"region:us"
] | genta-tech | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 25845146
num_examples: 104743
- name: test
num_bytes: 1380442
num_examples: 5463
- name: validation
num_bytes: 1376422
num_examples: 5463
download_size: 18108260
dataset_size: 28602010
license: cc-by-sa-4.0
task_categories:
- feature-extraction
- text-classification
language:
- id
size_categories:
- 100K<n<1M
tags:
- glue
- Text Similarity
---
# Dataset Card for "qnli-id"
This dataset is an Indonesian translation of the qnli subset of the [glue](https://huggingface.co/datasets/glue) benchmark.
# Citing & Authors
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
``` |
matejklemen/akces_gec | 2023-05-08T19:20:17.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | matejklemen | AKCES-GEC is a grammar error correction corpus for Czech generated from a subset of AKCES resources. | @article{naplava2019wnut,
title={Grammatical Error Correction in Low-Resource Scenarios},
author={N{\'a}plava, Jakub and Straka, Milan},
journal={arXiv preprint arXiv:1910.00353},
year={2019}
} | null | 0 | 3 | ---
license: cc-by-nc-sa-4.0
dataset_info:
- config_name: ann0
features:
- name: src_tokens
sequence: string
- name: tgt_tokens
sequence: string
- name: corrections
list:
- name: idx_src
sequence: int32
- name: idx_tgt
sequence: int32
- name: corr_types
sequence: string
splits:
- name: train
num_bytes: 11199287
num_examples: 42210
- name: validation
num_bytes: 713686
num_examples: 2485
- name: test
num_bytes: 741411
num_examples: 2676
download_size: 3534547
dataset_size: 12654384
- config_name: ann1
features:
- name: src_tokens
sequence: string
- name: tgt_tokens
sequence: string
- name: corrections
list:
- name: idx_src
sequence: int32
- name: idx_tgt
sequence: int32
- name: corr_types
sequence: string
splits:
- name: train
num_bytes: 8124054
num_examples: 42210
- name: validation
num_bytes: 618583
num_examples: 2485
- name: test
num_bytes: 655536
num_examples: 2676
download_size: 3534547
dataset_size: 9398173
---
There are two configs: `ann0` (default) and `ann1`. These correspond to the annotator ID whose annotations will be loaded.
**Important:** Annotations from annotator 1 only exist for the dev set, so the training and test sets will have no annotations.
It is up to the user to combine the annotations somehow. |
h2oai/openassistant_oasst1_h2ogpt_graded | 2023-05-09T03:22:25.000Z | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | h2oai | null | null | null | 1 | 3 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `openassistant_oasst1_h2ogpt_graded` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `30368`
- Number of columns: `5`
- Column names: `['input', 'source', 'prompt_type', 'grade_deberta', 'id']`
## Source
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/d1f8ce975a46056d41135d126dd33de8499aa26e/create_data.py#L1259)
|
illuin/ESLO | 2023-05-15T15:21:41.000Z | [
"task_categories:automatic-speech-recognition",
"language:fr",
"license:cc-by-nc-4.0",
"region:us"
] | illuin | ESLO dataset, each utterance are taken out individually | @misc{11403/eslo/v1,
title = {ESLO},
author = {LLL},
url = {https://hdl.handle.net/11403/eslo/v1},
note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr},
copyright = {Licence Creative Commons Attribution - Pas d'Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International},
year = {2023}
} | null | 0 | 3 | ---
task_categories:
- automatic-speech-recognition
language:
- fr
license: cc-by-nc-4.0
---
ESLO audio dataset
configs:
- no_overlap_no_hesitation
- no_hesitation
- no_overlap
- raw
Licence: Creative Commons Attribution - NonCommercial - ShareAlike 4.0 International
Dependencies:
- ffmpeg: `sudo apt-get install ffmpeg`
- ffmpeg-python: `pip install ffmpeg-python`
```
{'audio': {'array': array([-0.00250244, 0.00039673, 0.00326538, ..., 0.01953125,
0.02206421, 0.02304077]),
'path': None,
'sampling_rate': 16000},
'end_timestamp': 8.939,
'file': 'ESLO1_INTPERS_437',
'overlap': False,
'sentence': "eh bien je voudrais vous demander d'abord en quoi consiste votre "
'entreprise ici ? exactement',
'speaker': 'spk1',
'start_timestamp': 0.954}
```
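A minimal loading sketch, assuming the config names listed above (the split name is also an assumption):
```python
from datasets import load_dataset

# "raw" keeps everything; the other configs filter out overlaps and/or hesitations.
ds = load_dataset("illuin/ESLO", "no_overlap_no_hesitation", split="train")

sample = ds[0]
print(sample["file"], sample["speaker"], sample["start_timestamp"], sample["end_timestamp"])
print(sample["sentence"])
```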
Eshkol-Taravella I., Baude O., Maurel D., Hriba L., Dugua C., Tellier I., (2012), Un grand corpus oral « disponible » : le corpus d’Orléans 1968-2012., in Ressources linguistiques libres, TAL. Volume 52 – n° 3/2011, 17-46
Laboratoire Ligérien de Linguistique - UMR 7270 (LLL) (2023). ESLO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/eslo/v1. |
LennardZuendorf/Dynamically-Generated-Hate-Speech-Dataset | 2023-05-16T16:01:46.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"language:en",
"not-for-all-audiences",
"legal",
"arxiv:2012.15761",
"region:us"
] | LennardZuendorf | null | null | null | 1 | 3 | ---
task_categories:
- text-classification
- text-generation
language:
- en
tags:
- not-for-all-audiences
- legal
pretty_name: dynamically generated hate speech dataset
---
# Dataset Card for dynamically generated hate speech dataset
## Dataset Description
- **Homepage:** [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset)
- **Point of Contact:** [bertievidgen@gmail.com](mailto:bertievidgen@gmail.com)
### Dataset Summary
This is a copy of the Dynamically-Generated-Hate-Speech-Dataset, presented in [this paper](https://arxiv.org/abs/2012.15761) by
- **Bertie Vidgen**, **Tristan Thrush**, **Zeerak Waseem** and **Douwe Kiela**
## Original README from [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset/blob/main/README.md)
## Dynamically-Generated-Hate-Speech-Dataset
ReadMe for v0.2 of the Dynamically Generated Hate Speech Dataset from Vidgen et al. (2021). If you use the dataset, please cite our paper in the Proceedings of ACL 2021, and available on [Arxiv](https://arxiv.org/abs/2012.15761).
Contact Dr. Bertie Vidgen if you have feedback or queries: bertievidgen@gmail.com.
The full author list is: Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook AI Research), Zeerak Waseem (University of Sheffield) and Douwe Kiela (Facebook AI Research). This paper is an output of the Dynabench project: https://dynabench.org/tasks/5#overall
### Dataset descriptions
v0.2.2.csv is the full dataset used in our ACL paper.
v0.2.3.csv removes duplicate entries, all of which occurred in round 1. Duplicates come from two sources: (1) annotators entering the same content multiple times and (2) different annotators entering the same content. The duplicates are interesting for understanding the annotation process, and the challenges of dynamically generating datasets. However, they are likely to be less useful for training classifiers and so are removed in v0.2.3. We did not lower case the text before removing duplicates as capitalisations contain potentially useful signals.
### Overview
The Dynamically Generated Hate Speech Dataset is provided in one table.
'acl.id' is the unique ID of the entry.
'Text' is the content which has been entered. All content is synthetic.
'Label' is a binary variable, indicating whether or not the content has been identified as hateful. It takes two values: hate, nothate.
'Type' is a categorical variable, providing a secondary label for hateful content. For hate it can take five values: Animosity, Derogation, Dehumanization, Threatening and Support for Hateful Entities. Please see the paper for more detail. For nothate the 'type' is 'none'. In round 1 the 'type' was not given and is marked as 'notgiven'.
'Target' is a categorical variable, providing the group that is attacked by the hate. It can include intersectional characteristics and multiple groups can be identified. For nothate the type is 'none'. Note that in round 1 the 'target' was not given and is marked as 'notgiven'.
'Level' reports whether the entry is original content or a perturbation.
'Round' is a categorical variable. It gives the round of data entry (1, 2, 3 or 4) with a letter for whether the entry is original content ('a') or a perturbation ('b'). Perturbations were not made for round 1.
'Round.base' is a categorical variable. It gives the round of data entry, indicated with just a number (1, 2, 3 or 4).
'Split' is a categorical variable. It gives the data split that the entry has been assigned to. This can take the values 'train', 'dev' and 'test'. The choice of splits is explained in the paper.
'Annotator' is a categorical variable. It gives the annotator who entered the content. Annotator IDs are random alphanumeric strings. There are 20 annotators in the dataset.
'acl.id.matched' is the ID of the matched entry, connecting the original (given in 'acl.id') and the perturbed version.
For identities (recorded under 'Target') we use shorthand labels to constructed the dataset, which can be converted (and grouped) as follows:
none -> for non hateful entries
NoTargetRecorded -> for hateful entries with no target recorded
mixed -> Mixed race background
ethnic minority -> Ethnic Minorities
indig -> Indigenous people
indigwom -> Indigenous Women
non-white -> Non-whites (attacked as 'non-whites', rather than specific non-white groups which are generally addressed separately)
trav -> Travellers (including Roma, gypsies)
bla -> Black people
blawom -> Black women
blaman -> Black men
african -> African (all 'African' attacks will also be an attack against Black people)
jew -> Jewish people
mus -> Muslims
muswom -> Muslim women
wom -> Women
trans -> Trans people
gendermin -> Gender minorities,
bis -> Bisexual
gay -> Gay people (both men and women)
gayman -> Gay men
gaywom -> Lesbians
dis -> People with disabilities
working -> Working class people
old -> Elderly people
asi -> Asians
asiwom -> Asian women
east -> East Asians
south -> South Asians (e.g. Indians)
chinese -> Chinese people
pak -> Pakistanis
arab -> Arabs, including people from the Middle East
immig -> Immigrants
asylum -> Asylum seekers
ref -> Refugees
for -> Foreigners
eastern european -> Eastern Europeans
russian -> Russian people
pol -> Polish people
hispanic -> Hispanic people, including latinx and Mexicans
nazi -> Nazis ('Support' type of hate)
hitler -> Hitler ('Support' type of hate)
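A hedged sketch of turning the shorthand target values above into readable names when loading the released CSV with pandas (the file path and column-name case are assumptions):
```python
import pandas as pd

# File name follows the dataset descriptions above; adjust the path as needed.
df = pd.read_csv("v0.2.3.csv")

# A few entries from the shorthand list above; extend with the remaining ones as needed.
TARGET_MAP = {
    "bla": "Black people",
    "jew": "Jewish people",
    "mus": "Muslims",
    "wom": "Women",
    "immig": "Immigrants",
    "ref": "Refugees",
    "none": "none (non-hateful entry)",
}

# Column-name case is assumed; match it to the actual CSV header.
# 'target' can identify multiple or intersectional groups, so unmapped values are kept as-is.
df["target_readable"] = df["target"].map(TARGET_MAP).fillna(df["target"])
print(df[["text", "label", "target_readable"]].head())
```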
### Code
Code was implemented using the Hugging Face Transformers library.
## Additional Information
### Licensing Information
The original repository does not provide any license, but is free for use with proper citation of the original paper in the Proceedings of ACL 2021, available on [Arxiv](https://arxiv.org/abs/2012.15761)
### Citation Information
cite as [arXiv:2012.15761](https://arxiv.org/abs/2012.15761)
or [https://doi.org/10.48550/arXiv.2012.15761](https://doi.org/10.48550/arXiv.2012.15761)
abatilo/myanimelist-embeddings | 2023-05-09T20:51:17.000Z | [
"task_categories:text-classification",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | abatilo | null | null | null | 1 | 3 | ---
license: mit
task_categories:
- text-classification
- summarization
language:
- en
pretty_name: MyAnimeList Embeddings
size_categories:
- 10K<n<100K
---
# myanimelist-embeddings
This dataset is every non-empty anime synopsis from [MyAnimeList.net](https://myanimelist.net) run
through the `embed-multilingual-v2.0` embedding model from [Cohere AI](https://cohere.com).
## Sample code for searching for anime
Install some dependencies
```
pip install cohere==4.4.1 datasets==2.12.0 torch==2.0.1
```
Code heavily inspired by the [Cohere Wikipedia embeddings sample](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings#search)
```python
import os
import cohere
import torch
from datasets import load_dataset
co = cohere.Client(
os.environ["COHERE_API_KEY"]
) # Add your cohere API key from www.cohere.com
docs_stream = load_dataset(
    "abatilo/myanimelist-embeddings", split="train", streaming=True
)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc["embedding"])
doc_embeddings = torch.tensor(doc_embeddings)
while True:
query = input("What do you want to see?: ")
response = co.embed(texts=[query], model="embed-multilingual-v2.0")
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]["title"])
print(docs[doc_id]["synopsis"], "\n")
```
## Sample search queries
### high schoolers with super powers fight evil
```
What do you want to see?: high schoolers with super powers fight evil
Kigurumi Sentai Quiltian
Twin schoolgirls transform into their superhero aspects to save the world from an evil cabal of would-be dictators, but they can only fight for justice by having a lot of sex.
(Source: ANN)
Kekkaishi
Yoshimura Sumimura comes from a long line of "Kekkaishi," individuals who have supernatural abilities and are able to destroy evil creatures called Ayakashi that venture into the human realm from time to time. The Ayakashi are demons that look to feast on the power emanating from the land of Karasumori, which also happens to be where Yoshimura's high school is located. Now, Yoshimura must fight to protect his beloved school and hometown. Although, if it were up to him, he would rather be baking cakes than fighting off the ugly characters that show up at night.
Thankfully, Yoshimura isn't the only one helping to keep the baddies at bay. His childhood friend and neighbor, Tokine Yukimura, joins him in this righteous battle. Despite the fact that they are from rival clans, these two make a fantastic team. And teamwork is something vital to fighting the evil that is closing in, as the Ayakashi attack in waves, looking to claim the land as their own, and a shadowy organization looks on, ready to pounce when the time is right...
Shiritsu Araiso Koutougakkou Seitokai Shikkoubu
Kubota Makoto and Tokitoh Minoru (characters from Kazuya Minekura's manga Wild Adaptor—though no reference is made to the darker storyline of WA in this light-hearted anime)—are the muscle of their high school's all-powerful student council. They defend the student body from disorder—generated by both humans and demons—while avoiding their classes.
(Source: ANN)
```
### a pokemon trainer wants to be the very best
```
What do you want to see?: a pokemon trainer wants to be the very best
Pokemon
Pokémon are peculiar creatures with a vast array of different abilities and appearances; many people, known as Pokémon trainers, capture and train them, often with the intent of battling others. Young Satoshi has not only dreamed of becoming a Pokémon trainer but also a "Pokémon Master," and on the arrival of his 10th birthday, he finally has a chance to make that dream a reality. Unfortunately for him, all three Pokémon available to beginning trainers have already been claimed and only Pikachu, a rebellious Electric-type Pokémon, remains. However, this chance encounter would mark the start of a lifelong friendship and an epic adventure!
Setting off on a journey to become the very best, Satoshi and Pikachu travel across beautiful, sprawling regions with their friends Kasumi, a Water-type trainer, and Takeshi, a Rock-type trainer. But danger lurks around every corner. The infamous Team Rocket is always nearby, seeking to steal powerful Pokémon through nefarious schemes. It'll be up to Satoshi and his friends to thwart their efforts as he also strives to earn the eight Pokémon Gym Badges he'll need to challenge the Pokémon League, and eventually claim the title of Pokémon Master.
[Written by MAL Rewrite]
Pokemon Best Wishes!
As with both the Advanced Generation and Diamond & Pearl series before it, the Best Wishes! series begins with only Satoshi, headed off to the Isshu region, located far away from Kanto, Johto, Houen, and Sinnoh, with his Pikachu. After he meets up with the new trainer and rival Shooty and the region's Professor Araragi, he gains traveling companions in Iris, a girl from a town known for its Dragon Pokémon, and Dent, Pokémon Connoisseur and the Grass Pokémon specialist of the three Sanyou City Gym Leaders.
Pokemon Sun & Moon
After his mother wins a free trip to the islands, Pokémon trainer Satoshi and his partner Pikachu head for Melemele Island of the beautiful Alola region, which is filled with lots of new Pokémon and even variations of familiar faces. Eager to explore the island, Satoshi and Pikachu run wild with excitement, quickly losing their way while chasing after a Pokémon. The pair eventually stumbles upon the Pokémon School, an institution where students come to learn more about these fascinating creatures.
At the school, when he and one of the students—the no-nonsense Kaki—have a run-in with the nefarious thugs of Team Skull, Satoshi discovers the overwhelming might of the Z-Moves, powerful attacks originating from the Alola region that require the trainer and Pokémon to be in sync. Later that night, he and Pikachu have an encounter with the guardian deity Pokémon of Melemele Island, the mysterious Kapu Kokeko. The Pokémon of legend bestows upon them a Z-Ring, a necessary tool in using the Z-Moves. Dazzled by their earlier battle and now in possession of a Z-Ring, Satoshi and Pikachu decide to stay behind in the Alola Region to learn and master the strength of these powerful new attacks.
Enrolling in the Pokémon School, Satoshi is joined by classmates such as Lillie, who loves Pokémon but cannot bring herself to touch them, Kaki, and many others. Between attending classes, fending off the pesky Team Rocket—who themselves have arrived in Alola to pave the way for their organization's future plans—and taking on the Island Challenge that is necessary to master the Z-Moves, Satoshi and Pikachu are in for an exciting new adventure.
[Written by MAL Rewrite]
```
### hunting demons with swords
```
What do you want to see?: hunting demons with swords
Grandeek
This is a tale of swords and sorcery as the young warrior-woman Tia Allbright and her hapless assistant, Luke, battle demon assassins in a fantasy land.
Tia arrives on the island of Marcleida with her trusted sword 'Grandeek,' which holds a spirit within that helps her on her quests. She is soon turned away however. Determined to get on the island, Tia searches for a way past the fences that guard the entrance, as another stranger arrives on the island to take on a mysterious job. Someone has been killing the inhabitants of the island and has the ability to appear and disappear at will. Seems the sword 'Aihorn' has been stolen and the spirit that resides within it seeks vengenance on those who killed its master 50 years before.
As Tia makes her way inside the island, it becomes clear that both she, and the stranger, are after the sword Aihorn, hoping to bring to an end its bloody goal. But the sword has the ability to possess the person who wields it - putting Tia and the stranger at a great disadvantage.
Based on the manga by Kohime Ohse, Tia and Grandeek will have to face their most difficult challenge yet...
(Source: AnimeNfo)
Bemubemu Hunter Kotengu Tenmaru
Adventures of a demon slayer Tenmaru.
Karasu Tengu Kabuto
500 years ago in the Tensho Era of Japan, a man was born who defied the will of a demon; a man who had gods of good on his side; a man destined to battle evil....his name was Kabuto. Somehow, Kuroyasya Douki, the vile Black Night Demon, escaped his prison in hell and returned to the earthly plane to wreak vengeance on the family-line of Kabuto. None can escape his deadly magic and masterful skills with the blade; however, the gods of the North, West, East, and South band together to help Kabuto stand for Justice. With the questionable help of a diabolical talking sword that his own father forged, Kabuto may live another day to see his own sons born....
``` |
ewof/alpaca-instruct-unfiltered | 2023-05-13T03:54:52.000Z | [
"region:us"
] | ewof | null | null | null | 2 | 3 | This dataset is https://github.com/tatsu-lab/stanford_alpaca unfiltered, removing 2095 instances of blatant alignment.
49907 instructions remain.
clean.py was first ran on https://github.com/tatsu-lab/stanford_alpaca/blob/65512697dc67779a6e53c267488aba0ec4d7c02a/alpaca_data.json
The normal dedupe.py script didn't find any duplicates here.
inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
All credit to anon8231489123 for the cleanup script, which I adapted into wizardlm_clean.py and then further adapted into clean.py.
rishabhjain16/myst_pf_ot50 | 2023-05-10T12:18:19.000Z | [
"region:us"
] | rishabhjain16 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 8509570768.06
num_examples: 19332
- name: test
num_bytes: 1447570290.631
num_examples: 3317
download_size: 8974808612
dataset_size: 9957141058.691
---
# Dataset Card for "myst_pf_ot50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
silk-road/Vanilla-chinese-alpaca-luotuo | 2023-05-12T23:17:41.000Z | [
"size_categories:10K<n<100K",
"language:zh",
"license:apache-2.0",
"region:us"
] | silk-road | null | null | null | 13 | 3 | ---
license: apache-2.0
language:
- zh
pretty_name: f
size_categories:
- 10K<n<100K
---
Vanilla Luotuo (Vanilla骆驼) is the first dataset and model launched by the Luotuo project on March 21, 2023.
We will gradually release more datasets to HF, including:
- [ ] A Chinese translation of COCO Caption
- [ ] A Chinese translation of CoQA
- [ ] Embedding data for CNewSum
- [ ] Augmented open-domain QA data
- [ ] A Chinese translation of WizardLM
If you are also preparing these datasets, feel free to contact us so we can avoid spending money on duplicated work.
# Luotuo (骆驼): Open-Source Chinese Large Language Models
[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)
The Luotuo (骆驼) project is an open-source Chinese large language model project initiated by [冷子昂 (Ziang Leng)](https://blairleng.github.io) @ SenseTime, 陈启源 (Qiyuan Chen) @ Central China Normal University, and 李鲁鲁 (Cheng Li) @ SenseTime, and it contains a series of language models.
( Note: [Qiyuan Chen](https://qiyuan-chen.github.io/) is looking for an advisor for 2024 postgraduate recommendation (exam-exempt) admission; feel free to get in touch )
The Luotuo project is **not** an official product of SenseTime.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author={Ziang Leng, Qiyuan Chen and Cheng Li},
title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}
```
|
sanchit-gandhi/tedlium-data | 2023-05-11T12:18:03.000Z | [
"region:us"
] | sanchit-gandhi | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: string
- name: gender
dtype:
class_label:
names:
'0': unknown
'1': female
'2': male
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 52384399934.125
num_examples: 268263
- name: validation
num_bytes: 197798071.0
num_examples: 591
- name: test
num_bytes: 352803076.375
num_examples: 1469
download_size: 52658646425
dataset_size: 52935001081.5
---
# Dataset Card for "tedlium-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NEUDM/semeval-2015 | 2023-05-23T17:16:33.000Z | [
"language:en",
"region:us"
] | NEUDM | null | null | null | 1 | 3 | ---
language:
- en
---
> The datasets above are ABSA (Aspect-Based Sentiment Analysis) datasets. Their basic form is to extract from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different information, which is noted in the "instruction" key of each jsonl file. Here I have converted it into a generation task, requiring the model to produce the extraction results in a fixed format.
Supplement: The SemEval-2015 dataset folder contains two subfolders, "laptop" and "restaurant", split by the main topic of the texts. The extracted elements also differ between them: laptop extracts (aspect category, sentiment polarity) pairs, while restaurant extracts (aspect term, aspect category, sentiment polarity) triples.
#### An example entry from the jsonl file extracted from the acos dataset:
```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": "
Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words.
Input: A sentence
Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence.
Example:
Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\"
Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]'
"
}
```
> label and extra are not set here. The instruction uses the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) each use the same instruction template per dataset, with slightly different content, and in some datasets different entries within the same dataset have different instruction content.
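A minimal sketch for reading one of the jsonl files and recovering the tuples from the string-encoded `output` field (the file name is illustrative):
```python
import ast
import json

# File name is illustrative; each line is one JSON object like the example above.
with open("semeval2015_restaurant.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "output" is a Python-literal string such as
        # "[['computer', 'laptop usability', 'negative', 'difficulty']]".
        extracted = ast.literal_eval(record["output"])
        for item in extracted:
            print(record["input"][0], "->", item)
```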
#### Original dataset
- Data [link](https://alt.qcri.org/semeval2015/task12/)
- Paper:[SemEval-2015 Task 12: Aspect Based Sentiment Analysis](https://aclanthology.org/S15-2082/)
- Note: The data is split into the Laptop and Restaurant topics, placed in two separate folders. The extracted elements differ between the two topics.
#### Current SOTA
*Data from [PaperWithCode](https://paperswithcode.com/sota)*
- SemEval2015-Laptop
No evaluation was found for this subset
- [SemEval2015-Restaurant](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval-4)
- Evaluation metric: Accuracy (classification accuracy of the extraction)
- Model: HAABSA++ (**81.7**)
- Paper:[A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention](https://paperswithcode.com/paper/a-hybrid-approach-for-aspect-based-sentiment-1)
|
gshbao/DocNMT | 2023-05-12T07:52:30.000Z | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:de",
"license:afl-3.0",
"region:us"
] | gshbao | null | null | null | 1 | 3 | ---
license: afl-3.0
task_categories:
- translation
language:
- en
- de
pretty_name: Doc-Level NMT
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
### Dataset Summary
The benchmark datasets for document-level machine translation.
### Supported Tasks
Document-level Machine Translation Tasks.
### Languages
English-German
## Dataset Structure
### Data Instances
TED: iwslt17, News: nc2016, Europarl: europarl7
### Data Fields
Plain text where each line is a sentence; groups of sentence lines delimited by '\<d\>' lines form documents.
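A minimal parsing sketch under one reading of the format above, treating each '\<d\>' line as a document boundary (the file name is illustrative):
```python
# File name is illustrative; each line is a sentence, and "<d>" lines mark document boundaries.
documents, current = [], []
with open("train.en", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line == "<d>":
            if current:
                documents.append(current)
            current = []
        else:
            current.append(line)
if current:
    documents.append(current)

print(len(documents), "documents;", len(documents[0]), "sentences in the first one")
```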
### Data Splits
train, dev, test
### Data Usage
This dataset is created for the convenience of usage by https://github.com/baoguangsheng/g-transformer
|
Haidra-Org/AI-Horde-Ratings | 2023-10-10T22:02:23.000Z | [
"language:en",
"license:cc-by-sa-4.0",
"ratings",
"stable diffusion",
"aesthetic",
"artifacts",
"region:us"
] | Haidra-Org | null | null | null | 3 | 3 | ---
license: cc-by-sa-4.0
language:
- en
tags:
- ratings
- stable diffusion
- aesthetic
- artifacts
pretty_name: AI Horde Ratings
---
# AI Horde Aesthetic and Artifact Ratings
A dataset of exported aesthetic and artifact ratings provided by the [AI Horde](https://aihorde.net) community through our [open ratings API](https://ratings.aihorde.net/api).
Each row in this dataset presents the rating for a single image from the [diffusiondb](https://poloclub.github.io/diffusiondb/). Each image UUID in this parquet will match the diffusiondb filename.
Each rating contains an aesthetic rating of 1-10, where 1 represents an image found distasteful, and 10 an image most found very pleasing. This is an explicitly subjective rating.
Each rating also contains an artifact rating of 0-5, where 0 represents no artifacts or image disruption, and 5 represents an image ruined. This rating aims to be more objective.
The aim is for each image to be rated at least 5 times, so that a useful average can be ascertained.
While there are countermeasures to avoid bad actors, due to the open nature of the API for the ratings, some ratings might be random or malicious.
However, due to the vast amount of other valid ratings, the overarching trend should be towards accuracy.
Nevertheless, if you notice any ratings which are obviously malicious, or users which are consistently fake-rating, please let us know and we'll clear them from this dataset.
# Structure
The columns in the dataset are as follows
* ratings_count: How many times this image has been rated throughout this dataset
* rating: The aesthetic (1-10) rating.
* kudos: The amount of kudos (i.e. priority) the user had at the moment of rating this image. Higher values represent users who have positively contributed to the AI Horde. This can be used to discover bad actors. (-50 are anonymous ratings)
* account_age: How old the user account is. This can be used to discover bad actors.
* usage_requests: How many images this user has generated at the moment of rating this image. This can be used to discover bad actors.
* created_at: When this rating was added
* client_agent: The client which was used to provide this rating. Unknown clients are more suspicious. This can be used to discover bad actors.
* artifacts: The artifacts (0-5) rating.
* user_id: The hashed user id who provided this rating
* trusted: If true, this user has been trusted by the horde by generating images or text for others for a long amount of time.
* validated: If true, this user's ratings have been manually validated by one of the AI Horde moderators.
* captchas_failed: How many captchas this user has failed. This can be used to discover bad actors. This value is cumulative with succeeded captchas, so a negative value means the user has that many more succeeded captchas than failed ones.
* country: From which country did the rating originate. This can be used to create location-based rating models.
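A minimal sketch, following the column notes above, for keeping ratings from trusted, non-anonymous raters and averaging them per image (the parquet path and the image-UUID column name are assumptions):
```python
import pandas as pd

# Parquet path is illustrative.
df = pd.read_parquet("ratings.parquet")

# Drop anonymous ratings (kudos of -50, per the notes above) and keep trusted raters.
clean = df[df["trusted"] & (df["kudos"] > -50)]

# "image_id" is an assumed name for the image UUID column described at the top of this card.
per_image = clean.groupby("image_id")[["rating", "artifacts"]].mean()
print(per_image.head())
```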
# Use cases
* [Clip-based aesthetic scorer](https://github.com/kenjiqq/aesthetics-scorer) ([Huggingface Demo](https://huggingface.co/spaces/kenjiqq/aesthetics-scorer)) |
biglam/on_the_books | 2023-06-07T08:44:39.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-3.0",
"lam",
"legal",
"region:us"
] | biglam | This file is the training set that was used to train an algorithm to identify Jim Crow laws.
It contains laws that are labeled as "Jim Crow" (jim_crow=1) or "Not Jim Crow" (jim_crow=0).
The source of the determination is also provided. | TODO | null | 0 | 3 | ---
license: cc-by-3.0
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: string
- name: jim_crow
dtype:
class_label:
names:
'0': no_jim_crow
'1': jim_crow
- name: type
dtype: string
- name: chapter_num
dtype: int32
- name: section_num
dtype: int32
- name: chapter_text
dtype: string
- name: section_text
dtype: string
splits:
- name: train
num_bytes: 2119395
num_examples: 1785
download_size: 2085196
dataset_size: 2119395
task_categories:
- text-classification
language:
- en
tags:
- lam
- legal
pretty_name: On the Books Training Set
size_categories:
- 1K<n<10K
--- |
tasksource/I2D2 | 2023-05-31T08:34:55.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"commonsense",
"arxiv:2212.09246",
"region:us"
] | tasksource | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- commonsense
---
code:
https://i2d2.allen.ai/
https://arxiv.org/abs/2212.09246
```
@inproceedings{Bhagavatula2022GenGen,
title={Generating Generics: Knowledge Induction with NeuroLogic and Self-Imitation},
author={Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Ronan Le Bras, Ximing Lu, Lianhui Qin, Keisuke Sakaguchi, Swabha Swayamdipta, Peter West, Yejin Choi},
booktitle={arXiv},
year={2022}
}
``` |
WasuratS/ECMWF_Thailand_Land_Air_Temperatures | 2023-05-15T01:20:10.000Z | [
"task_categories:time-series-forecasting",
"size_categories:100M<n<1B",
"license:eupl-1.1",
"climate",
"region:us"
] | WasuratS | null | null | null | 0 | 3 | ---
license: eupl-1.1
task_categories:
- time-series-forecasting
tags:
- climate
size_categories:
- 100M<n<1B
---
# Dataset Summary
Contains hourly 2-metre land (on-shore) air temperature data over grid areas covering Thailand. <br/>
Data is retrieved from the [Copernicus Climate Data Store](https://cds.climate.copernicus.eu/cdsapp#!/home) on [ERA5-Land hourly data from 1950 to present](https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview)
<br/>
The Thailand area in this context is **Latitude** = **[5.77434, 20.43353]** and **Longitude** = **[97.96852, 105.22908]** <br/>
For more details on the data, you can refer to [ERA5-Land hourly data from 1950 to present](https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-land?tab=overview)
- Data Granularity: Hourly per Latitude/ Longitude
- Period: **31/Dec/1999** - **08/May/2023**
- Temperature Unit: Celsius (°C) (Original data from [ERA5-Land hourly data from 1950 to present](https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview) is Kelvin)
# Source Data
- Organization of the producer: ECMWF
# Data Creation
Below is an example of how to query the data in monthly requests using Python via the [CDS API](https://cds.climate.copernicus.eu/api-how-to). <br/>
Script can be found [here](https://huggingface.co/datasets/WasuratS/ECMWF_Thailand_Land_Air_Temperatures/blob/main/cds_api_requestor_example.py)
``` python
import cdsapi
c = cdsapi.Client()
month_list = [str(num).zfill(2) for num in range(1, 13)]
day_list = [str(num).zfill(2) for num in range(1, 32)]
time_list = [str(num).zfill(2) + ":00" for num in range(0, 24)]
year_list = [str(num) for num in range(2000, 2022)]
for year in year_list:
for month in month_list:
c.retrieve('reanalysis-era5-land',
{
'variable': [
'2m_temperature']
,
'year': year,
'month' : month,
'day': day_list,
'time': time_list,
'format': 'grib',
'area': [
20.43, 97.96, 5.77,
105.22,
],
},
f'{year}_{month}_hourly_2m_temp_TH.grib')
```
The direct file output from the API is in ```.grib``` format. To make further analysis easier, I have converted it to ```.parquet``` format. <br/>
To convert the GRIB format to a pandas dataframe, you can use the [xarray](https://github.com/pydata/xarray) and [cfgrib](https://github.com/ecmwf/cfgrib) libraries, as in the example snippet below.
``` python
import xarray as xr
import cfgrib
ds = xr.open_dataset('2022_12_31_hourly_2m_temp_TH.grib', engine='cfgrib')
df = ds.to_dataframe().reset_index()
```
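To finish the workflow described above, a short sketch that also applies the Kelvin-to-Celsius conversion and writes parquet (the `t2m` variable name and the output path are assumptions; `pyarrow` or `fastparquet` is required):
```python
import xarray as xr
import cfgrib

ds = xr.open_dataset("2022_12_31_hourly_2m_temp_TH.grib", engine="cfgrib")
df = ds.to_dataframe().reset_index()

# "t2m" is an assumed variable name for the 2 m temperature field; convert Kelvin to Celsius.
df["t2m"] = df["t2m"] - 273.15

# Output path is illustrative.
df.to_parquet("2022_12_31_hourly_2m_temp_TH.parquet")
```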
## Licensing
[Climate Data Store Product Licensing](https://cds.climate.copernicus.eu/api/v2/terms/static/licence-to-use-copernicus-products.pdf)
## Citation
- This data was generated using **Copernicus Climate Change Service** information and <br/>
contains modified **Copernicus Climate Change Service** information on 1999/Dec/31 - 2023/May/08 data period
- Muñoz Sabater, J. (2019): ERA5-Land hourly data from 1950 to present. <br/>
Copernicus Climate Change Service (C3S) Climate Data Store (CDS). <br/>
DOI: [10.24381/cds.e2161bac](https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview) (Accessed on 13-May-2023)
- Copernicus Climate Change Service (C3S) (2022): ERA5-Land hourly data from 1950 to present. <br/>
Copernicus Climate Change Service (C3S) Climate Data Store (CDS). <br/>
DOI: [10.24381/cds.e2161bac](https://cds.climate.copernicus.eu/cdsapp#!/dataset/10.24381/cds.e2161bac?tab=overview) (Accessed on 13-May-2023) |
Englishman2022/prosocial-dialog-filtered | 2023-05-14T17:48:49.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_ids:dialogue-generation",
"task_ids:multi-class-classification",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:Proso... | Englishman2022 | null | null | null | 1 | 3 | ---
license: cc-by-4.0
task_categories:
- conversational
- text-classification
language:
- en
source_datasets:
- ProsocialDialog
language_creators:
- crowdsourced
- machine-generated
multilinguality:
- monolingual
pretty_name: ProsocialDialogFiltered
tags:
- dialogue
- dialogue safety
- social norm
- rules-of-thumb
size_categories:
- 10K<n<100K
task_ids:
- dialogue-generation
- multi-class-classification
---
## Dataset Summary
ProsocialDialogFiltered is a filtered version of the ProsocialDialog dataset.
Multiple versions are present:
- In train_no_casual, rows with the label "casual" have been filtered out as a starting point.
- In train_no_possibly, rows with "possibly needs caution" have been filtered out.
- In train_no_probably, rows with "probably needs caution" have been filtered out, as I found those to be largely pointless as well, leaving only "needs caution" and "needs intervention".
- In the final train dataset, rows containing any of several phrases such as "You should not" and "you should refrain from" have been filtered out. This is done in an attempt to reduce the number of refusals language models issue to the user, in order to create better and more open models.
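A rough sketch of this phrase-based filtering (the phrase list, file name, format, and column name are illustrative, not the exact ones used):
```python
import pandas as pd

# Illustrative phrases; the actual filter used a longer list.
PHRASES = ["You should not", "you should refrain from"]

# File name and format are illustrative.
df = pd.read_json("train_no_probably.jsonl", lines=True)

# Column name is illustrative; drop rows whose response contains any of the phrases.
pattern = "|".join(PHRASES)
filtered = df[~df["response"].str.contains(pattern, case=False, na=False)]
```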
ProsocialDialog is a large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content.
**For more information on the source dataset, refer to the original official [huggingface](https://huggingface.co/datasets/allenai/prosocial-dialog) and [paper](https://arxiv.org/abs/2205.12688).**
Possible drawbacks:
- Some ending messages have been cut off. This is only of concern if you rely on the 'episode_done' indicator.
## Languages
English
## Additional Information
### Citation
```
@inproceedings{kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
``` |
0x22almostEvil/russe-semantics-sim | 2023-05-17T15:43:59.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"semantics",
"region:us"
] | 0x22almostEvil | null | null | null | 0 | 3 | ---
license: mit
task_categories:
- text-classification
language:
- ru
tags:
- semantics
size_categories:
- 100K<n<1M
---
# Dataset Card for russe-semantics-sim with ~200K entries. Russian language.
### Dataset Summary
License: MIT. Contains a CSV listing word1, word2, their `connection score` (whether they are synonyms or associations), and the type of connection.
### Original Datasets are available here:
- https://github.com/nlpub/russe-evaluation |
Gdot/clts | 2023-05-19T02:14:56.000Z | [
"task_categories:summarization",
"language:zh",
"region:us"
] | Gdot | null | null | null | 3 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 706157853
num_examples: 148317
- name: valid
num_bytes: 97794789
num_examples: 20393
- name: test
num_bytes: 78816630
num_examples: 16687
download_size: 593531838
dataset_size: 882769272
task_categories:
- summarization
language:
- zh
---
# Dataset Card for "clts"
[original link](https://github.com/lxj5957/CLTS-Dataset)
|
Pranavkpba2000/skin_cancer_small_dataset | 2023-05-16T11:12:18.000Z | [
"region:us"
] | Pranavkpba2000 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 66578294.72
num_examples: 11360
- name: test
num_bytes: 17394813.72
num_examples: 2840
download_size: 83755065
dataset_size: 83973108.44
---
# Dataset Card for "skin_cancer_small_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
openllmplayground/pandagpt_visual_instruction_dataset | 2023-05-23T15:21:35.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | openllmplayground | null | null | null | 11 | 3 | ---
license: cc-by-nc-sa-4.0
---
**[Dataset Details]** This dataset is constructed by combining [LLaVA Visual Instruct 150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) and the [dataset](https://github.com/Vision-CAIR/MiniGPT-4/blob/main/dataset/README_2_STAGE.md) released by MiniGPT-4.
**[License]** Attribution-NonCommercial 4.0 International. It should abide by the policy of OpenAI: [https://openai.com/policies/terms-of-use](https://openai.com/policies/terms-of-use)
## Intended use
**Primary intended uses**: The primary use of this dataset is research on large multimodal models and chatbots.
**Primary intended users**: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
KaraAgroAI/CADI-AI | 2023-06-09T12:36:22.000Z | [
"task_categories:object-detection",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-sa-4.0",
"object detection",
"vision",
"region:us"
] | KaraAgroAI | null | null | null | 2 | 3 | ---
license: cc-by-sa-4.0
task_categories:
- object-detection
language:
- en
tags:
- object detection
- vision
size_categories:
- 1K<n<10K
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_button_content: "Acknowledge license"
extra_gated_fields:
I agree to attribute the creator of this repository: checkbox
---
## Cashew Disease Identication with Artificial Intelligence (CADI-AI) Dataset
This repository contains a comprehensive dataset of cashew images captured by drones, accompanied by meticulously annotated labels.
Each high-resolution image in the dataset has a resolution of 1600x1300 pixels, providing fine details for analysis and model training.
To facilitate efficient object detection, each image is paired with a corresponding text file in YOLO format.
The YOLO format file contains annotations, including class labels and bounding box coordinates.
### Dataset Labels
```
['abiotic', 'insect', 'disease']
```
### Number of Images
```json
{'train': 3788, 'valid': 710, 'test': 238}
```
### Number of Instances Annotated
```json
{'insect':1618, 'abiotic':13960, 'disease':7032}
```
### Folder structure after unzipping repective folders
```markdown
Data/
└── train/
├── images
├── labels
└── val/
├── images
├── labels
└── test/
├── images
├── labels
```
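A minimal sketch for reading one label file from the layout above, assuming the standard YOLO convention of a class index followed by normalized box coordinates (the file name is illustrative):
```python
# Class names follow the label list above; their order is an assumption.
CLASSES = ["abiotic", "insect", "disease"]

# Each line of a YOLO label file is: class_id x_center y_center width height (normalized to [0, 1]).
with open("Data/train/labels/example.txt") as f:
    for line in f:
        class_id, x_c, y_c, w, h = line.split()
        print(CLASSES[int(class_id)], float(x_c), float(y_c), float(w), float(h))
```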
### Dataset Information
The dataset was created by a team of data scientists from the KaraAgro AI Foundation,
with support from agricultural scientists and officers.
The creation of this dataset was made possible through funding from the
Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) through their projects
[Market-Oriented Value Chains for Jobs & Growth in the ECOWAS Region (MOVE)](https://www.giz.de/en/worldwide/108524.html) and
[FAIR Forward - Artificial Intelligence for All](https://www.bmz-digital.global/en/overview-of-initiatives/fair-forward/), which GIZ implements on
behalf of the German Federal Ministry for Economic Cooperation and Development (BMZ).
For detailed information regarding the dataset, we invite you to explore the accompanying datasheet available [here](https://drive.google.com/file/d/1viv-PtZC_j9S_K1mPl4R1lFRKxoFlR_M/view?usp=sharing).
This comprehensive resource offers a deeper understanding of the dataset's composition, variables, data collection methodologies, and other relevant details.
|
0x22almostEvil/ws-semantics-simnrel | 2023-05-20T09:35:49.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:ru",
"language:de",
"language:it",
"license:apache-2.0",
"semantics",
"arxiv:1508.00106",
"region:us"
] | 0x22almostEvil | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
- ru
- de
- it
tags:
- semantics
size_categories:
- 1K<n<10K
---
# Dataset Card for WS353-semantics-sim-and-rel with ~2K entries.
### Dataset Summary
License: Apache-2.0. Contains a CSV listing word1, word2, their `connection score`, the type of connection, and the language.
### Original Datasets are available here:
- https://leviants.com/multilingual-simlex999-and-wordsim353/
### Paper of original Dataset:
- https://arxiv.org/pdf/1508.00106v5.pdf |
Fraol/Py150-processed | 2023-05-19T23:58:41.000Z | [
"region:us"
] | Fraol | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: repository_path
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 726142896.0
num_examples: 120000
- name: val
num_bytes: 90767862.0
num_examples: 15000
- name: test
num_bytes: 90767862.0
num_examples: 15000
download_size: 343675742
dataset_size: 907678620.0
---
# Dataset Card for "Py150-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Dataset Creation
The original dataset is at https://www.sri.inf.ethz.ch/py150.
# Citation Information
```
@article{raychev2016probabilistic,
  title={Probabilistic model for code with decision trees},
  author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
  journal={ACM SIGPLAN Notices},
  volume={51},
  number={10},
  pages={731--747},
  year={2016},
  publisher={ACM New York, NY, USA}
}
```
Yuchong/us-breast-cancer | 2023-05-17T23:40:34.000Z | [
"region:us"
] | Yuchong | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 42431652.0
num_examples: 130
download_size: 10004141
dataset_size: 42431652.0
---
# Dataset Card for "us-breast-cancer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nan-Do/instructional_code-search-net-ruby | 2023-05-20T05:25:23.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"Ruby",
"Code Generation",
"Instruction Response",
"region:us"
] | Nan-Do | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 30679722
num_examples: 51470
download_size: 12427089
dataset_size: 30679722
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- Ruby
- Code Generation
- Instruction Response
pretty_name: Instructional Ruby Dataset
---
# Dataset Card for "instructional_code-search-net-ruby"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-ruby
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for Ruby.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-ruby
### Annotations
The dataset includes instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries.
### Licensing Information
Apache 2.0 |
joey234/mmlu-astronomy | 2023-08-23T04:28:04.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 5110
num_examples: 5
- name: test
num_bytes: 764857
num_examples: 152
download_size: 95332
dataset_size: 769967
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-astronomy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_mathematics | 2023-08-23T04:31:20.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 6168
num_examples: 5
- name: test
num_bytes: 422940
num_examples: 100
download_size: 81860
dataset_size: 429108
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-college_mathematics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-college_physics | 2023-08-23T04:32:28.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 5777
num_examples: 5
- name: test
num_bytes: 391468
num_examples: 102
download_size: 80709
dataset_size: 397245
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-college_physics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-conceptual_physics | 2023-08-23T04:33:33.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 4101
num_examples: 5
- name: test
num_bytes: 618511
num_examples: 235
download_size: 85347
dataset_size: 622612
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-conceptual_physics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-formal_logic | 2023-08-23T04:35:43.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 5605
num_examples: 5
- name: test
num_bytes: 599410
num_examples: 126
download_size: 87495
dataset_size: 605015
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-formal_logic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-global_facts | 2023-08-23T04:36:13.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 4472
num_examples: 5
- name: test
num_bytes: 330022
num_examples: 100
download_size: 57281
dataset_size: 334494
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-global_facts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_computer_science | 2023-08-23T04:37:53.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 7186
num_examples: 5
- name: test
num_bytes: 551036
num_examples: 100
download_size: 100819
dataset_size: 558222
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-high_school_computer_science"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_mathematics | 2023-08-23T04:40:38.000Z | [
"region:us"
] | joey234 | null | null | null | 1 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 5543
num_examples: 5
- name: test
num_bytes: 951603
num_examples: 270
download_size: 127368
dataset_size: 957146
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-high_school_mathematics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-high_school_physics | 2023-08-23T04:41:43.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 5549
num_examples: 5
- name: test
num_bytes: 632618
num_examples: 151
download_size: 110066
dataset_size: 638167
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-high_school_physics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-miscellaneous | 2023-08-23T04:49:00.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 3496
num_examples: 5
- name: test
num_bytes: 1695944
num_examples: 783
download_size: 237552
dataset_size: 1699440
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-miscellaneous"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-moral_disputes | 2023-08-23T04:49:30.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 4935
num_examples: 5
- name: test
num_bytes: 1532082
num_examples: 346
download_size: 153575
dataset_size: 1537017
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-moral_disputes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-moral_scenarios | 2023-08-23T04:50:03.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 7379
num_examples: 5
- name: test
num_bytes: 4986899
num_examples: 895
download_size: 339959
dataset_size: 4994278
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-moral_scenarios"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Fredithefish/ShareGPT-Unfiltered-RedPajama-Chat-format | 2023-06-06T14:17:56.000Z | [
"license:apache-2.0",
"region:us"
] | Fredithefish | null | null | null | 4 | 3 | ---
license: apache-2.0
---
# ShareGPT unfiltered dataset in RedPajama-Chat format
This dataset was created by converting <a href="https://huggingface.co/datasets/Fredithefish/ShareGPT-unfiltered-alpaca-lora-format">The alpaca-lora formatted ShareGPT dataset</a> to the format required by RedPajama-Chat.<br>
This script was used for the conversion: https://github.com/fredi-python/Alpaca2INCITE-Dataset-Converter/blob/main/convert.py
WARNING: Only the first human and gpt text of each conversation from the original dataset is included in the dataset.
## The format
```{"text": "<human>: hello\n<bot>: Hello! How can I help you today?"}``` |
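A rough conversion sketch consistent with the format above; the `instruction`/`output` field names are assumptions based on the standard alpaca-lora format, and the linked convert.py is the authoritative implementation:
```python
import json

# Field names ("instruction", "output") are assumptions based on the alpaca-lora format.
with open("sharegpt_alpaca.json", encoding="utf-8") as f:
    records = json.load(f)

# Keep only the first human/bot exchange per record, matching the warning above.
converted = [
    {"text": f"<human>: {r['instruction']}\n<bot>: {r['output']}"}
    for r in records
]

# Write one JSON object per line, matching the format shown above.
with open("sharegpt_redpajama.jsonl", "w", encoding="utf-8") as f:
    for row in converted:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```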
asoria/duorc | 2023-05-19T14:59:33.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"sourc... | asoria | DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie. | @inproceedings{DuoRC,
author = {Amrita Saha and Rahul Aralikatte and Mitesh M. Khapra and Karthik Sankaranarayanan},
title = {{DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension}},
booktitle = {Meeting of the Association for Computational Linguistics (ACL)},
year = {2018}
} | null | 0 | 3 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
- text2text-generation
task_ids:
- abstractive-qa
- extractive-qa
paperswithcode_id: duorc
pretty_name: DuoRC
configs:
- ParaphraseRC
- SelfRC
dataset_info:
- config_name: SelfRC
features:
- name: plot_id
dtype: string
- name: plot
dtype: string
- name: title
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: no_answer
dtype: bool
splits:
- name: train
num_bytes: 239852925
num_examples: 60721
- name: validation
num_bytes: 51662575
num_examples: 12961
- name: test
num_bytes: 49142766
num_examples: 12559
download_size: 34462660
dataset_size: 340658266
- config_name: ParaphraseRC
features:
- name: plot_id
dtype: string
- name: plot
dtype: string
- name: title
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: no_answer
dtype: bool
splits:
- name: train
num_bytes: 496683105
num_examples: 69524
- name: validation
num_bytes: 106510545
num_examples: 15591
- name: test
num_bytes: 115215816
num_examples: 15857
download_size: 62921050
dataset_size: 718409466
---
# Dataset Card for duorc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DuoRC](https://duorc.github.io/)
- **Repository:** [GitHub](https://github.com/duorc/duorc)
- **Paper:** [arXiv](https://arxiv.org/abs/1804.07927)
- **Leaderboard:** [DuoRC Leaderboard](https://duorc.github.io/#leaderboard)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The DuoRC dataset is an English-language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were free to pick answers from the plots or to synthesize their own. It contains two sub-datasets - SelfRC and ParaphraseRC. The SelfRC dataset is built solely on Wikipedia movie plots. ParaphraseRC has questions written from the Wikipedia movie plots, with answers given based on the corresponding IMDb movie plots.
### Supported Tasks and Leaderboards
- `abstractive-qa` : The dataset can be used to train a model for Abstractive Question Answering. An abstractive question answering model is presented with a passage and a question and is expected to generate a multi-word answer. The model performance is measured by exact-match and F1 score, similar to [SQuAD V1.1](https://huggingface.co/metrics/squad) or [SQuAD V2](https://huggingface.co/metrics/squad_v2). A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) may be used for this task.
- `extractive-qa`: The dataset can be used to train a model for Extractive Question Answering. An extractive question answering model is presented with a passage and a question and is expected to predict the start and end of the answer span in the passage. The model performance is measured by exact-match and F1 score, similar to [SQuAD V1.1](https://huggingface.co/metrics/squad) or [SQuAD V2](https://huggingface.co/metrics/squad_v2). [BertForQuestionAnswering](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering) or any other similar model may be used for this task.
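As a minimal, hedged sketch of the extractive setting (the canonical `duorc` hub id that this card mirrors and the SQuAD-tuned checkpoint below are illustrative choices, not part of this card):
```python
from datasets import load_dataset
from transformers import pipeline

# Load the SelfRC configuration; "duorc" is the canonical hub id this card mirrors.
selfrc = load_dataset("duorc", "SelfRC", split="validation")

# Any extractive QA checkpoint works here; this SQuAD-tuned model is just an example.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

example = selfrc[0]
prediction = qa(question=example["question"], context=example["plot"])
print(example["question"])
print(prediction["answer"], "| gold:", example["answers"])
```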
### Languages
The text in the dataset is in English, as spoken by Wikipedia writers for movie plots. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
```
{'answers': ['They arrived by train.'], 'no_answer': False, 'plot': "200 years in the future, Mars has been colonized by a high-tech company.\nMelanie Ballard (Natasha Henstridge) arrives by train to a Mars mining camp which has cut all communication links with the company headquarters. She's not alone, as she is with a group of fellow police officers. They find the mining camp deserted except for a person in the prison, Desolation Williams (Ice Cube), who seems to laugh about them because they are all going to die. They were supposed to take Desolation to headquarters, but decide to explore first to find out what happened.They find a man inside an encapsulated mining car, who tells them not to open it. However, they do and he tries to kill them. One of the cops witnesses strange men with deep scarred and heavily tattooed faces killing the remaining survivors. The cops realise they need to leave the place fast.Desolation explains that the miners opened a kind of Martian construction in the soil which unleashed red dust. Those who breathed that dust became violent psychopaths who started to build weapons and kill the uninfected. They changed genetically, becoming distorted but much stronger.The cops and Desolation leave the prison with difficulty, and devise a plan to kill all the genetically modified ex-miners on the way out. However, the plan goes awry, and only Melanie and Desolation reach headquarters alive. Melanie realises that her bosses won't ever believe her. However, the red dust eventually arrives to headquarters, and Melanie and Desolation need to fight once again.", 'plot_id': '/m/03vyhn', 'question': 'How did the police arrive at the Mars mining camp?', 'question_id': 'b440de7d-9c3f-841c-eaec-a14bdff950d1', 'title': 'Ghosts of Mars'}
```
### Data Fields
- `plot_id`: a `string` feature containing the movie plot ID.
- `plot`: a `string` feature containing the movie plot text.
- `title`: a `string` feature containing the movie title.
- `question_id`: a `string` feature containing the question ID.
- `question`: a `string` feature containing the question text.
- `answers`: a `list` of `string` features containing list of answers.
- `no_answer`: a `bool` feature informing whether the question has no answer or not.
### Data Splits
The data is split into training, dev, and test sets such that the resulting sets contain 70%, 15%, and 15% of the total QA pairs, and no QA pairs for any movie seen in train are included in the test set. The final split sizes are as follows:

| Name         | Train | Dev   | Test  |
| ------------ | ----- | ----- | ----- |
| SelfRC       | 60721 | 12961 | 12559 |
| ParaphraseRC | 69524 | 15591 | 15857 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
Wikipedia and IMDb movie plots
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
For SelfRC, the annotators were allowed to mark an answer span in the plot or synthesize their own answers after reading Wikipedia movie plots.
For ParaphraseRC, questions from the Wikipedia movie plots from SelfRC were used and the annotators were asked to answer based on IMDb movie plots.
#### Who are the annotators?
Amazon Mechanical Turk Workers
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan in a collaboration between IIT Madras and IBM Research.
### Licensing Information
[MIT License](https://github.com/duorc/duorc/blob/master/LICENSE)
### Citation Information
```
@inproceedings{DuoRC,
author = { Amrita Saha and Rahul Aralikatte and Mitesh M. Khapra and Karthik Sankaranarayanan},
title = {{DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension}},
booktitle = {Meeting of the Association for Computational Linguistics (ACL)},
year = {2018}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
0x22almostEvil/semantics-ws-qna-oa | 2023-05-21T07:08:16.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:ru",
"language:de",
"language:it",
"license:apache-2.0",
"semantics",
"arxiv:1508.00106",
"region:us"
] | 0x22almostEvil | null | null | null | 0 | 3 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
- ru
- de
- it
tags:
- semantics
size_categories:
- 1K<n<10K
---
# Dataset Card for semantics-ws-qna-oa with ~2K entries.
### Dataset Summary
License: Apache-2.0. Contains a parquet file with INSTRUCTION, RESPONSE, SOURCE, and METADATA columns.
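A minimal loading sketch (the `train` split name is an assumption; the column names follow the card above):
```python
from datasets import load_dataset

# Columns per the card: INSTRUCTION, RESPONSE, SOURCE, METADATA.
ds = load_dataset("0x22almostEvil/semantics-ws-qna-oa", split="train")
print(ds.column_names)
print(ds[0]["INSTRUCTION"])
print(ds[0]["RESPONSE"])
```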
### Original Datasets are available here:
- https://leviants.com/multilingual-simlex999-and-wordsim353/
### Paper of original Dataset:
- https://arxiv.org/pdf/1508.00106v5.pdf |
ztphs980/taptap_datasets | 2023-05-23T12:32:37.000Z | [
"language:en",
"license:mit",
"arxiv:2305.09696",
"region:us"
] | ztphs980 | null | null | null | 2 | 3 | ---
license: mit
language:
- en
---
This repository contains a total of 483 tabular datasets with meaningful column names collected from OpenML, UCI, and Kaggle platforms. The last column of each dataset is the label column. For more details, please refer to our paper https://arxiv.org/abs/2305.09696.
You can use the [code](https://github.com/ZhangTP1996/TapTap/blob/master/load_pretraining_datasets.py) to load all the datasets into a dictionary of pd.DataFrame.
An example script can be found below:
```python
from datasets import load_dataset
import pandas as pd
import numpy as np
data = {}
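# All tables ship in the single 'train' split; convert it to a plain dict of columns.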
dataset = load_dataset(path='ztphs980/taptap_datasets')
dataset = dataset['train'].to_dict()
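# Each 'table' value is a stringified dict of columns; eval rebuilds it, mapping 'nan' to np.nan.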
for table_name, table in zip(dataset['dataset_name'], dataset['table']):
table = pd.DataFrame.from_dict(eval(table, {'nan': np.nan}))
data[table_name] = table
``` |
ucalyptus/shrutilipi_bengali | 2023-05-20T21:26:05.000Z | [
"region:us"
] | ucalyptus | null | null | null | 3 | 3 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 78086461594.866
num_examples: 378691
download_size: 74356189780
dataset_size: 78086461594.866
---
# Dataset Card for "shrutilipi_bengali"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GitMylo/bark-semantic-training | 2023-05-21T09:19:58.000Z | [
"license:mit",
"region:us"
] | GitMylo | null | null | null | 3 | 3 | ---
license: mit
---
|
vkovenko/cross_domain_uk_reviews | 2023-05-21T14:49:09.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:uk",
"license:cc",
"region:us"
] | vkovenko | null | null | null | 0 | 3 | ---
license: cc
task_categories:
- text-classification
language:
- uk
size_categories:
- 100K<n<1M
---
The dataset contains Ukrainian reviews from three different domains:
1) Hotels.
2) Restaurants.
3) Products.
The dataset comprises several .csv files, which one may find useful:
1) processed_data.csv - the processed dataset itself.
2) train_val_test_indices.csv - csv file with train/val/test indices. The split was stratified w.r.t. dataset name (hotels, restaurants, products) and rating.
3) bad_ids.csv - csv file with ids of bad samples marked using a model-filtering approach; only ids of samples for which the difference between the actual and predicted rating is bigger than 2 points are kept in this file. A loading sketch is given after this list.
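A minimal loading sketch under stated assumptions — the column layouts of the two helper files are not documented in this card, so the code inspects them and only assumes that the first column of bad_ids.csv holds row ids:
```python
import pandas as pd

# Processed reviews; lemmatized tokens and POS tags live in two extra columns.
reviews = pd.read_csv("processed_data.csv")

# Helper files; their exact column names are not documented in this card.
splits = pd.read_csv("train_val_test_indices.csv")
bad_ids = pd.read_csv("bad_ids.csv")
print(splits.columns.tolist(), bad_ids.columns.tolist())

# Drop model-flagged samples, assuming the first column of bad_ids holds row indices.
clean = reviews.drop(index=bad_ids.iloc[:, 0], errors="ignore")
print(len(reviews), len(clean))
```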
The data is scraped from Tripadvisor (https://www.tripadvisor.com/) and Rozetka (https://rozetka.com.ua/).
The dataset was initially used for extracting key phrases relevant to one of the rating categories, based on a trained machine learning model (future article link will be here).
The dataset is processed to include two additional columns: one with lemmatized tokens and another with POS tags. Both lemmatization and POS tagging are done using the pymorphy2 (https://pymorphy2.readthedocs.io/en/stable/) library.
The words are tokenized using a specific regex tokenizer to account for the use of the apostrophe.
Reviews that weren't in Ukrainian were translated into Ukrainian using Microsoft Translator and re-checked manually afterwards.
|