id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
Nan-Do/reason_code-search-net-python | 2023-06-06T00:34:33.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"code",
"reasoning",
"Python",
"region:us"
] | Nan-Do | null | null | null | 5 | 29 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: TYPE
dtype: int64
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 399930143
num_examples: 429059
download_size: 89360217
dataset_size: 399930143
license: apache-2.0
task_categories:
- summarization
- text-generation
- conversational
- text2text-generation
language:
- en
tags:
- code
- reasoning
- Python
pretty_name: Reasoning dataset for Python
---
# Dataset Card for "reason_code-search-net-python"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/reason_code-search-net-python
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is an instructional dataset for Python.
The dataset contains five different kinds of tasks.
Given a Python 3 function:
- Type 1: Generate a summary explaining what it does. (For example: "This function counts the number of objects stored in the jsonl file passed as input.")
- Type 2: Generate a summary explaining what its input parameters represent. (For example: "infile: a file descriptor of a file containing json objects in jsonl format.")
- Type 3: Generate a summary explaining what the return value represents. (For example: "The function returns the number of json objects in the file passed as input.")
- Type 4: Generate a summary explaining the type of the return value. (For example: "The function returns an int.")
- Type 5: Generate a summary explaining the types of its input parameters. (For example: "infile: A file descriptor.")
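As a sketch of how one might work with these task types, the snippet below filters illustrative rows on the `TYPE` column (the records and helper are made-up assumptions, not part of the dataset; with the 🤗 `datasets` library you would typically call `load_dataset("Nan-Do/reason_code-search-net-python")` and `filter`):

```python
# Illustrative stand-ins for real rows; column names match the schema above.
records = [
    {"INSTRUCTION": "Summarize this function: ...", "RESPONSE": "This function counts the objects in a jsonl file.", "TYPE": 1, "SOURCE": "codesearchnet"},
    {"INSTRUCTION": "What does the return value represent?", "RESPONSE": "The function returns the number of json objects.", "TYPE": 3, "SOURCE": "codesearchnet"},
]

def select_task(rows, task_type):
    """Keep only the rows whose TYPE column matches one of the five task types (1-5)."""
    return [r for r in rows if r["TYPE"] == task_type]

summaries = select_task(records, 1)
```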
### Languages
The dataset is in English.
### Data Splits
There is only a train split; no validation or test splits are provided.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the Python 3 reasoning/understanding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-python
### Annotations
The dataset includes instruction, response, type, and source columns.
The type column indicates the type of task (from 1 to 5).
#### Annotation process
The annotation procedure was done using templates, NLP techniques to generate human-like questions and responses, and the Python AST module to parse the code.
The responses were generated by parsing the docstrings of the functions (only those that included the required information).
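As a rough illustration of the AST step (a minimal sketch, not the authors' actual pipeline; the sample function is made up):

```python
import ast

# A made-up function whose docstring carries the information used for responses.
source = '''
def count_objects(infile):
    """Count the number of json objects in the jsonl file passed as input."""
    return sum(1 for _ in infile)
'''

# Parse the code with the AST module and collect each function's docstring.
tree = ast.parse(source)
docstrings = {
    node.name: ast.get_docstring(node)
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef)
}
```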
### Licensing Information
Apache 2.0 |
napsternxg/nyt_ingredients | 2023-10-07T00:45:48.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"recipe",
"ingredients",
"region:us"
] | napsternxg | New York Times Ingredient Phrase Tagger Dataset
We use a conditional random field model (CRF) to extract tags from labelled training data, which was tagged by human news assistants.
We wrote about our approach on the [New York Times Open blog](http://open.blogs.nytimes.com/2015/04/09/extracting-structured-data-from-recipes-using-conditional-random-fields/).
This repo contains scripts to extract the Quantity, Unit, Name, and Comments from unstructured ingredient phrases.
We use it on Cooking to format incoming recipes. Given the following input:
```
1 pound carrots, young ones if possible
Kosher salt, to taste
2 tablespoons sherry vinegar
2 tablespoons honey
2 tablespoons extra-virgin olive oil
1 medium-size shallot, peeled and finely diced
1/2 teaspoon fresh thyme leaves, finely chopped
Black pepper, to taste
``` | @misc{nytimesTaggedIngredients,
author = {Erica Greene and Adam Mckaig},
title = {{O}ur {T}agged {I}ngredients {D}ata is {N}ow on {G}it{H}ub --- archive.nytimes.com},
howpublished = {\\url{https://archive.nytimes.com/open.blogs.nytimes.com/2016/04/27/structured-ingredients-data-tagging/}},
year = {},
note = {[Accessed 03-10-2023]},
} | null | 0 | 29 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: nyt_ingredients
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- recipe
- ingredients
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# New York Times Ingredient Phrase Tagger Dataset
Original source: https://github.com/nytimes/ingredient-phrase-tagger
From the source:
> We use a conditional random field model (CRF) to extract tags from labelled training data, which was tagged by human news assistants.
> We wrote about our approach on the [New York Times Open blog](http://open.blogs.nytimes.com/2015/04/09/extracting-structured-data-from-recipes-using-conditional-random-fields/).
> This repo contains scripts to extract the Quantity, Unit, Name, and Comments from unstructured ingredient phrases.
> We use it on Cooking to format incoming recipes. Given the following input:
```
1 pound carrots, young ones if possible
Kosher salt, to taste
2 tablespoons sherry vinegar
2 tablespoons honey
2 tablespoons extra-virgin olive oil
1 medium-size shallot, peeled and finely diced
1/2 teaspoon fresh thyme leaves, finely chopped
Black pepper, to taste
```
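For a sense of the target structure, here is a crude regex baseline over phrases like the ones above (an illustrative sketch only; the repo's actual extraction uses the CRF model, and the field names are our assumptions):

```python
import re

# Very naive: quantity = leading digits/fractions, unit = next word,
# name = rest up to a comma, comment = everything after the comma.
PATTERN = re.compile(r"^(?P<qty>[\d/]+)\s+(?P<unit>\w+)\s+(?P<name>[^,]+)(?:,\s*(?P<comment>.*))?$")

def parse_ingredient(phrase):
    m = PATTERN.match(phrase)
    if m:
        return m.groupdict()
    # No leading quantity (e.g. "Kosher salt, to taste"): treat the whole phrase as the name.
    return {"qty": None, "unit": None, "name": phrase, "comment": None}

parsed = parse_ingredient("2 tablespoons sherry vinegar")
```

A real tagger has to cope with far messier phrases ("1 medium-size shallot, peeled and finely diced" already defeats this regex), which is exactly why the repo trains a CRF instead.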
|
tomekkorbak/hh-rlhf_with_features | 2023-06-20T08:37:43.000Z | [
"region:us"
] | tomekkorbak | null | null | null | 0 | 29 | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: helpfulness_chosen
dtype: int64
- name: helpfulness_rejected
dtype: int64
- name: specificity_chosen
dtype: int64
- name: specificity_rejected
dtype: int64
- name: intent_chosen
dtype: int64
- name: intent_rejected
dtype: int64
- name: factuality_chosen
dtype: int64
- name: factuality_rejected
dtype: int64
- name: easy-to-understand_chosen
dtype: int64
- name: easy-to-understand_rejected
dtype: int64
- name: relevance_chosen
dtype: int64
- name: relevance_rejected
dtype: int64
- name: readability_chosen
dtype: int64
- name: readability_rejected
dtype: int64
- name: enough-detail_chosen
dtype: int64
- name: enough-detail_rejected
dtype: int64
- name: biased:_chosen
dtype: int64
- name: biased:_rejected
dtype: int64
- name: fail-to-consider-individual-preferences_chosen
dtype: int64
- name: fail-to-consider-individual-preferences_rejected
dtype: int64
- name: repetetive_chosen
dtype: int64
- name: repetetive_rejected
dtype: int64
- name: fail-to-consider-context_chosen
dtype: int64
- name: fail-to-consider-context_rejected
dtype: int64
- name: too-long_chosen
dtype: int64
- name: too-long_rejected
dtype: int64
splits:
- name: train
num_bytes: 17131073
num_examples: 19148
download_size: 8313539
dataset_size: 17131073
---
# Dataset Card for "hh-rlhf_with_features"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
moonmelonpizza/constitution_of_india | 2023-06-23T21:09:43.000Z | [
"license:unknown",
"region:us"
] | moonmelonpizza | null | null | null | 0 | 29 | ---
license: unknown
---
|
Sangeetha/Kaggle-LLM-Science-Exam | 2023-08-08T01:12:39.000Z | [
"license:apache-2.0",
"region:us"
] | Sangeetha | null | null | null | 0 | 29 | ---
license: apache-2.0
---
# Dataset Card for the LLM Science Exam Kaggle Competition
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
https://www.kaggle.com/competitions/kaggle-llm-science-exam/data
### Languages
[en, de, tl, it, es, fr, pt, id, pl, ro, so, ca, da, sw, hu, no, nl, et, af, hr, lv, sl]
## Dataset Structure
Columns
prompt - the text of the question being asked
A - option A; if this option is correct, then answer will be A
B - option B; if this option is correct, then answer will be B
C - option C; if this option is correct, then answer will be C
D - option D; if this option is correct, then answer will be D
E - option E; if this option is correct, then answer will be E
answer - the most correct answer, as defined by the generating LLM (one of A, B, C, D, or E).
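Given the columns above, a simple exact-match scoring pass might look like this (a sketch with made-up rows; the competition's own evaluation metric may differ):

```python
# Illustrative rows following the prompt/answer column layout described above.
rows = [
    {"prompt": "Which particle carries negative charge?", "answer": "A"},
    {"prompt": "What is the SI unit of force?", "answer": "C"},
]
predictions = ["A", "B"]

def accuracy(rows, predictions):
    """Fraction of questions where the predicted letter matches the answer column."""
    hits = sum(p == r["answer"] for p, r in zip(predictions, rows))
    return hits / len(rows)

acc = accuracy(rows, predictions)
```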
### Data Fields
[Prompt, Options, Answer]
### Data Splits
Train: 6,684 rows
## Dataset Creation
All credit goes to the competition organizers. The task: answer difficult science-based questions written by a Large Language Model.
#### Who are the source language producers?
https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview
The dataset was generated with GPT-3.5, which clocks in at 175 billion parameters.
### Citation Information
All credit to: https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview and the competition participants who posted the curated dataset
### Contributions
Kaggle - LLM Science Exam Contributors
|
RobbeD/csgo-diffusion-maps-512 | 2023-08-08T07:22:38.000Z | [
"region:us"
] | RobbeD | null | null | null | 0 | 29 | ---
dataset_info:
features:
- name: finish_catalog
dtype: int64
- name: weapon
dtype: string
- name: description
dtype: string
- name: flavor_text
dtype: string
- name: finish_style
dtype: string
- name: skin
dtype: string
- name: mask_image
dtype: image
- name: conditioning_image
dtype: image
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 578102700.41
num_examples: 1145
download_size: 304017476
dataset_size: 578102700.41
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "csgo-diffusion-maps-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
botp/yentinglin-traditional_mandarin_instructions | 2023-08-16T06:59:16.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:cc-by-nc-4.0",
"arxiv:2305.13711",
"arxiv:2104.09864",
"region:us"
] | botp | null | null | null | 0 | 29 | ---
license: cc-by-nc-4.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- zh
pretty_name: Traditional Chinese Instruction-tuning Set
size_categories:
- 100K<n<1M
duplicated_from: yentinglin/traditional_mandarin_instructions
---
# Language Models for Taiwanese Culture
<p align="center">
✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a>
•
🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
• 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
**Taiwan-LLaMa v1.0** is pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance on Taiwanese-culture content.
- [ ] **Extend max length**: Utilizing the Rope mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's length will be extended from 4k to 8k.
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 |
| 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 |
| NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。|
| 甚麼是22K? | "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。|
| 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> |
| Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from Common Crawl in Traditional Mandarin.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The data is translated into Traditional Mandarin for evaluating the general capability.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
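The relative scores are a straightforward ratio against the ChatGPT baseline; as a sketch (the raw scores below are made up, not the benchmark's):

```python
def relative_score(model_score, baseline_score):
    """Express a model's benchmark score relative to the ChatGPT baseline (= 100%)."""
    return model_score / baseline_score * 100

demo = relative_score(7.5, 10.0)  # a model scoring 7.5 vs. a baseline of 10.0 -> 75.0%
```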
## How to deploy the model on my own machine?
We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
Prompt format follows vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
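Filling the template above for a single turn can be sketched as follows (the helper name is ours, not part of the repo):

```python
# The vicuna-v1.1 style template quoted above, with {user} as the placeholder.
TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {user} ASSISTANT:"
)

def build_prompt(user_message):
    """Fill the template with the user's message; generation continues after ASSISTANT:."""
    return TEMPLATE.format(user=user_message)

prompt = build_prompt("NTU 在哪?")
```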
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
|
harouzie/vi_en-translation | 2023-08-19T03:21:28.000Z | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:vi",
"language:en",
"license:mit",
"region:us"
] | harouzie | null | null | null | 0 | 29 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: English
dtype: string
- name: Vietnamese
dtype: string
splits:
- name: train
num_bytes: 17912544
num_examples: 203272
- name: test
num_bytes: 2239068
num_examples: 25409
- name: valid
num_bytes: 2239068
num_examples: 25409
download_size: 14189665
dataset_size: 22390680
task_categories:
- translation
language:
- vi
- en
pretty_name: vietnamese-english-250k-translation
size_categories:
- 100K<n<1M
--- |
F1nn21/playing-cards | 2023-08-26T22:48:51.000Z | [
"task_categories:image-classification",
"language:en",
"license:other",
"playing cards",
"cards",
"region:us"
] | F1nn21 | null | null | null | 1 | 29 | ---
license: other
task_categories:
- image-classification
language:
- en
tags:
- playing cards
- cards
--- |
StudentLLM/Open-Wyvern-74k | 2023-09-06T00:24:42.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | StudentLLM | null | null | null | 2 | 29 | ---
task_categories:
- text-classification
- question-answering
- summarization
- conversational
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
<p align="center"><img src="https://cdn-uploads.huggingface.co/production/uploads/63e087b6a98d931aa90c1b9c/jm4fCY9DMGDxDRyhIeDZh.jpeg"></p>
# The Wyvern 🐉 Dataset
Let's introduce the **Wyvern 🐉** dataset, the new combination of datasets([Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca),
[Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), [airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1),
[Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k))!
We have integrated high-quality datasets, following the claim that quality matters more than quantity.
In addition, we have deduplicated the combined data to improve its quality, because each source dataset contains some data contamination.
Please see below for more details about the dataset!
# Dataset Details
**Wyvern 🐉** dataset is mixture of several datasets(Open-Orca, Open-Platypus, airoboros, Dolly) as mentioned above.
The specific configuration of the dataset is as follows.
(Open-Orca GPT-4 answered dataset was sampled using stratified sampling)
- **Open-Platypus(100%) + airoboros(100%) + Open-Orca(GPT-4)(5%)(stratified sampled) + Dolly-15k(100%)**
|Dataset Name|Sampled Size(ratio)|Deduped Size|License Type|
|---|---|---|---|
|[Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)|24.9k(100%)|16.8k|None|
|[airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1)|36.3k(100%)|11k|apache-2.0|
|[Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca)|999.9k → 49.7k(5%)|35.6k|MIT|
|[Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)|15k(100%)|11k|cc-by-sa-3.0|
After the deduplication process, the size of the combined dataset shrinks from 125k to 74k examples! (125k → 74k)
# Data Deduplication
We referred to Open-Platypus's [data similarity check code](https://github.com/arielnlee/Platypus/blob/main/data_pipeline/data_similarity.ipynb) to deduplicate the duplicated data.
The specific code for deduplication will be uploaded soon!
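Until then, the idea can be approximated with a much cruder exact-match pass (a sketch under the assumption that lowercasing and whitespace normalization catch verbatim repeats; the linked notebook does a proper similarity check):

```python
def dedupe(instructions):
    """Drop verbatim repeats, ignoring case and surrounding whitespace.
    A crude stand-in for the similarity-based check linked above."""
    seen = set()
    kept = []
    for text in instructions:
        key = " ".join(text.lower().split())
        if key not in seen:
            seen.add(key)
            kept.append(text)
    return kept

deduped = dedupe(["What is 2+2?", "what is 2+2? ", "Name a prime."])
```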
# Citations
```
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
``` |
chats-bug/agent_action_plan | 2023-09-04T11:22:49.000Z | [
"region:us"
] | chats-bug | null | null | null | 3 | 29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2487201.95821727
num_examples: 861
- name: test
num_bytes: 623967.0417827298
num_examples: 216
download_size: 0
dataset_size: 3111169.0
---
# Dataset Card for "agent_action_plan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pierre-pessarossi/tiny_shakespeare_dialogue | 2023-09-05T09:59:52.000Z | [
"region:us"
] | pierre-pessarossi | null | null | null | 0 | 29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2798654
num_examples: 6281
- name: validation
num_bytes: 166728
num_examples: 439
- name: test
num_bytes: 115868
num_examples: 498
download_size: 957486
dataset_size: 3081250
---
# Dataset Card for "tiny_shakespeare_dialogue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hahaday2022/LinMinMei | 2023-09-06T10:12:02.000Z | [
"region:us"
] | hahaday2022 | null | null | null | 0 | 29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 15940192.0
num_examples: 48
download_size: 15604646
dataset_size: 15940192.0
---
# Dataset Card for "LinMinMei"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FanChen0116/bus_few4_40x_pvi | 2023-09-26T19:53:15.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 29 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-from_location
'2': B-from_location
'3': B-leaving_date
'4': I-leaving_date
'5': I-to_location
'6': B-to_location
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 345681
num_examples: 1400
- name: validation
num_bytes: 6900
num_examples: 35
- name: test
num_bytes: 70618
num_examples: 377
download_size: 45026
dataset_size: 423199
---
# Dataset Card for "bus_few4_40x_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huangyt/FINETUNE3 | 2023-09-11T13:42:55.000Z | [
"license:openrail",
"region:us"
] | huangyt | null | null | null | 0 | 29 | ---
license: openrail
---

# 📔 **DATASET**
| **Dataset** | Class | Number of Questions |
| ------- | ----------------------------------------------------------------- | ------------------------ |
| **FLAN_CoT(zs)** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense | 8000 |
| **Prm800k** | Reasoning 、 MATH | 6713 |
| **ScienceQA** | ScienceQA | 5177 |
| **SciBench** | ScienceQA | 695 |
| **ReClor** | Reasoning | 1624 |
| **TheoremQA** | Commonsense 、 MATH 、 ScienceQA | 800 |
| **OpenBookQA** | Text_Understanding 、 Reasoning 、 Commonsense 、 ScienceQA | 5957 |
| **ARB** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense 、 Text_Understanding | 605 |
| **Openassistant-guanaco** | Commonsense 、 Text_Understanding 、 Reasoning | 802 |
| **SAT** | Text_Understanding 、 Reasoning 、 MATH | 426 |
| **GRE、GMAT** | Reasoning 、 MATH | 254 |
| **AMC、AIME** | Reasoning 、 MATH | 1000 |
| **LSAT** | Reasoning 、 LAW | 1009 |
# 📌 **Method**
## *Improving the dataset*
Based on the "Textbooks Are All You Need" paper, we want to try fine-tuning using advanced questions.
## *Dataset Format Definition*
Using the "instruction, input, output" format tends to lean towards guided datasets. In this format, each sample includes an instruction, an input, and an expected output. The instruction provides guidance on how to process the input to generate the output. Datasets in this format are often used to train models to perform specific tasks, as they explicitly indicate the operations the model should perform.
```
{
"input": "",
"output": "",
"instruction": ""
}
```
- ### [FLAN_V2 COT(ZS)](https://huggingface.co/datasets/conceptofmind/cot_submix_original/tree/main)
We only extract the 'zs_opt' examples from CoT and categorize each task.
- ### SAT、GRE、GMAT、AMC、AIME、LSAT
We configure the input for datasets such as GRE, GMAT, and SAT as "Please read the question and options carefully, then select the most appropriate answer and provide the corresponding explanation." Meanwhile, for the math datasets, the input is set to "Please provide the answer along with a corresponding explanation based on the given question." Moreover, the questions are arranged in ascending order of difficulty. We do this because, according to the Orca paper, training started with GPT-3.5 and later transitioned to GPT-4; a progressive learning strategy was used to keep the student model from acquiring knowledge beyond its scope, which would deliver suboptimal results. Since this approach was found to be effective, for datasets like AMC and AIME that come with explicit difficulty levels, we have arranged the examples to embody this gradual, progressive learning technique.
Furthermore, their question and options are combined to form the instruction, and the label and solution are merged to become the output.
Lastly, for the LSAT dataset, since it doesn't involve step-by-step solutions, the passage becomes the instruction, the combination of the question and options serves as the input, and the label is the output.
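The LSAT mapping just described might be sketched as follows (the record's field names are illustrative assumptions, not the raw dataset's schema):

```python
def to_instruction_format(record):
    """Map an LSAT-style record to the instruction/input/output layout:
    passage -> instruction, question + options -> input, label -> output."""
    options = "\n".join(f"{letter}. {text}" for letter, text in record["options"])
    return {
        "instruction": record["passage"],
        "input": record["question"] + "\n" + options,
        "output": record["label"],
    }

sample = to_instruction_format({
    "passage": "All judges in town K are lawyers. ...",
    "question": "Which one of the following must be true?",
    "options": [("A", "Some lawyers are judges."), ("B", "No lawyers are judges.")],
    "label": "A",
})
```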
- ### [OTHER](https://github.com/arielnlee/Platypus/tree/main/data_pipeline)
Prm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.
## *Sampling Algorithms*
Since the flan_v2 cot dataset includes tasks like:
- cot_esnli
- cot_strategyqa
- cot_qasc
- stream_qed
- cot_gsm8k
- cot_ecqa
- cot_creak
- stream_aqua
To ensure this dataset contains diverse, high-quality data, we first select the zs_opt questions. Then we keep only the questions whose output length is at least the average length; this step aims to help the model learn richer reasoning steps. After that, we perform stratified sampling. Initially, we attempted stratified sampling before the length-based filtering, but that approach resulted in varying sample sizes, making it hard to reproduce. Thus, we decided to filter by length first and then perform stratified sampling.
```py
import json
import random

with open("cot_ORIGINAL.json", "r") as f:
    raw_data = json.load(f)

# --- part 1: keep only zero-shot option ("zs_opt") questions ---
zsopt_data = [item for item in raw_data if item["template_type"] == "zs_opt"]

# --- part 2: keep outputs at least as long as the average output length ---
output_lengths = [len(item["targets"]) for item in zsopt_data]
average_length = sum(output_lengths) / len(output_lengths)
filtered_data = [item for item in zsopt_data if len(item["targets"]) >= average_length]

# Count the number of samples for each task
class_counts = {}
for item in filtered_data:
    task_name = item["task_name"]
    class_counts[task_name] = class_counts.get(task_name, 0) + 1

# --- part 3: stratified sampling ---
total_samples = 8000  # we plan to select a total of 8000 samples
sample_sizes = {
    task_name: round(count / len(filtered_data) * total_samples)
    for task_name, count in class_counts.items()
}

stratified_samples = {}  # perform stratified sampling for each task
for task_name, sample_size in sample_sizes.items():
    class_samples = [item for item in filtered_data if item["task_name"] == task_name]
    stratified_samples[task_name] = random.sample(class_samples, sample_size)

# Convert to the instruction/input/output format
final_samples = []
for task_name, samples in stratified_samples.items():
    for sample in samples:
        final_samples.append(
            {
                "input": "",  # no separate input for these examples
                "output": sample["targets"],  # answer with reasoning steps
                "instruction": sample["inputs"],  # question
            }
        )

with open("cot_change.json", "w") as f:
    json.dump(final_samples, f, indent=2)
```
The math data arranged according to difficulty level:
```py
import json
with open("math-json.json", "r", encoding="utf-8") as f:
data_list = json.load(f)
sorted_data = sorted(data_list, key=lambda x: x["other"]["level"])
output_data = [
{
"input": "Please provide the answer along with a corresponding explanation based on the given question.",
"output": f"{item['answer']},solution:{item['other']['solution']}",
"instruction": item["question"],
}
for item in sorted_data
]
with open("math_convert.json", "w", encoding="utf-8") as output_file:
json.dump(output_data, output_file, ensure_ascii=False, indent=4)
``` |
AnandhDhakshinesh/FirstDataset | 2023-09-13T13:38:50.000Z | [
"region:us"
] | AnandhDhakshinesh | null | null | null | 0 | 29 | Entry not found |
ic-fspml/stock_news_sentiment_instructions_format | 2023-09-15T12:20:32.000Z | [
"region:us"
] | ic-fspml | null | null | null | 1 | 29 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 96060614
num_examples: 200998
- name: validation
num_bytes: 9605493
num_examples: 20100
- name: test
num_bytes: 14402769
num_examples: 30150
download_size: 20619051
dataset_size: 120068876
---
# Dataset Card for "stock_news_sentiment_instructions_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hack90/virus_dna_dedup_minihash_0.9_kmer_7 | 2023-09-19T18:17:54.000Z | [
"region:us"
] | Hack90 | null | null | null | 0 | 29 | ---
dataset_info:
features:
- name: sequence_x
dtype: string
- name: similarity_filter
dtype: float64
- name: id
dtype: string
- name: sequence_y
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
- name: missing_seq_count
dtype: int64
- name: missingness
dtype: float64
- name: seq_filled
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 314286490
num_examples: 10885
download_size: 146460942
dataset_size: 314286490
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "virus_dna_dedup_minihash_0.9_kmer_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hdeldar/Persian-Text-llama2-1k | 2023-09-17T14:53:05.000Z | [
"region:us"
] | hdeldar | null | null | null | 0 | 29 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1830325
num_examples: 1000
download_size: 1841325
dataset_size: 1830325
dataset_name: json
configs:
- config_name: default
data_files:
- split: train
path: data/data-*
---
# Persian-Text-QA: Lazy Llama 2 Formatting
This is a subset (1k samples) of the [`SeyedAli/Persian-Text-QA`](https://huggingface.co/datasets/SeyedAli/Persian-Text-QA) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). It was created using the following [colab notebook](https://colab.research.google.com/drive/1Ad7a9zMmkxuXTOh1Z7-rNSICA4dybpM2?usp=sharing).
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for [this article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) about fine-tuning a Llama 2 (chat) model in a Google Colab.
|
ricardosantoss/primary_icd | 2023-09-22T21:05:37.000Z | [
"region:us"
] | ricardosantoss | null | null | null | 0 | 29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: TEXT
dtype: string
- name: ICD9_CODE
sequence: string
splits:
- name: train
num_bytes: 390398482
num_examples: 38701
- name: test
num_bytes: 50879443
num_examples: 5000
- name: validation
num_bytes: 50320021
num_examples: 5000
download_size: 258595856
dataset_size: 491597946
---
# Dataset Card for "primary_icd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Duxiaoman-DI/FinCorpus | 2023-09-22T10:10:10.000Z | [
"size_categories:10M<n<100M",
"language:zh",
"license:apache-2.0",
"finance",
"region:us"
] | Duxiaoman-DI | null | null | null | 21 | 29 | ---
license: apache-2.0
language:
- zh
tags:
- finance
size_categories:
- 10M<n<100M
---
A Chinese financial news and information dataset, comprising (before compression):
- Listed-company announcements: announcement_data.jsonl 20G
- Financial news and articles:
  - fin_news_data.jsonl 30G
  - fin_articles_data.jsonl 10G
- Financial exam questions: fin_exam.jsonl 370M
Data format:
```
{
  "text": <text content>,
  "meta": {
    "source": <data source>
  }
}
``` |
yzhuang/autotree_automl_electricity_dim7_sd0 | 2023-09-29T22:58:59.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 29 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence: float32
- name: input_y
sequence: float32
splits:
- name: train
num_bytes: 1400412
num_examples: 26931
- name: validation
num_bytes: 600236
num_examples: 11543
download_size: 1231734
dataset_size: 2000648
---
# Dataset Card for "autotree_automl_electricity_dim7_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mrabhi0505/rule_code | 2023-09-29T11:32:48.000Z | [
"region:us"
] | mrabhi0505 | null | null | null | 0 | 29 | Entry not found |
polm-stability/xwinograd-ja | 2023-10-06T08:34:15.000Z | [
"license:cc-by-4.0",
"arxiv:2211.01786",
"arxiv:2106.12066",
"region:us"
] | polm-stability | null | null | null | 0 | 29 | ---
license: cc-by-4.0
---
This is the Japanese portion of the xwinograd dataset, formatted for easy use.
The original data can be found [here](https://huggingface.co/datasets/Muennighoff/xwinograd). When using this data, please cite the original papers.
```
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{tikhonov2021heads,
title={It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning},
author={Alexey Tikhonov and Max Ryabinin},
year={2021},
eprint={2106.12066},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Sajjad-Sh33/val_ds | 2023-10-06T11:40:37.000Z | [
"region:us"
] | Sajjad-Sh33 | null | null | null | 0 | 29 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: validation
num_bytes: 1300317226.53
num_examples: 8515
download_size: 1325144616
dataset_size: 1300317226.53
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "val_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lighteval/siqa | 2023-10-07T08:03:32.000Z | [
"region:us"
] | lighteval | null | null | null | 1 | 29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answerA
dtype: string
- name: answerB
dtype: string
- name: answerC
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 6327209
num_examples: 33410
- name: validation
num_bytes: 372815
num_examples: 1954
download_size: 3678635
dataset_size: 6700024
---
# Dataset Card for "siqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
crawl_domain | 2022-11-18T19:47:14.000Z | [
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-Common-Crawl",
"source_datasets:original",... | null | Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. "commoncrawl" to "common crawl"). Breaking domain names such as "openresearch" into component words "open" and "research" is important for applications such as Text-to-Speech synthesis and web search. Common Crawl is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so we had to manually annotate the data. The dataset is stored as plaintext file where each line is an example of space separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first. | @inproceedings{zrs2020urlsegmentation,
title={Semi-supervised URL Segmentation with Recurrent Neural Networks Pre-trained on Knowledge Graph Entities},
author={Hao Zhang and Jae Ro and Richard William Sproat},
booktitle={The 28th International Conference on Computational Linguistics (COLING 2020)},
year={2020}
} | null | 0 | 28 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-Common-Crawl
- original
task_categories:
- other
task_ids: []
paperswithcode_id: common-crawl-domain-names
pretty_name: Common Crawl Domain Names
tags:
- web-search
- text-to-speech
dataset_info:
features:
- name: example
dtype: string
splits:
- name: train
num_bytes: 321134
num_examples: 17572
- name: test
num_bytes: 39712
num_examples: 2170
- name: validation
num_bytes: 36018
num_examples: 1953
download_size: 331763
dataset_size: 396864
---
# Dataset Card for Common Crawl Domain Names
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/google-research-datasets/common-crawl-domain-names
- **Repository:** https://github.com/google-research-datasets/common-crawl-domain-names
- **Paper:** https://arxiv.org/pdf/2011.03138
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. "commoncrawl" to "common crawl").
Breaking [domain names](https://developer.mozilla.org/en-US/docs/Learn/Common_questions/What_is_a_URL) such as "openresearch" into component words "open" and "research" is important for applications such as Text-to-Speech synthesis and web search. [Common Crawl](https://commoncrawl.org/) is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so we had to manually annotate the data.
### Supported Tasks and Leaderboards
- Text-to-Speech synthesis
- Web search
### Languages
en: English
## Dataset Structure
### Data Instances
Each sample is an example of space separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first.
For example:
```
Open B S D
NASA
ASAP Workouts
```
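Since each example stores the space-separated segments, a harder (lowercased, unsegmented) input and its segmentation target can be derived with a small helper, for example:

```python
def make_pair(example):
    """Turn 'ASAP Workouts' into ('asapworkouts', ['ASAP', 'Workouts'])."""
    segments = example.split(" ")
    return "".join(segments).lower(), segments

inp, target = make_pair("ASAP Workouts")
print(inp)     # asapworkouts
print(target)  # ['ASAP', 'Workouts']
```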
### Data Fields
- `example`: a `string` feature: space separated segments of a domain name.
### Data Splits
| split | size | trivial | avg_input_length | avg_segments |
|-------|-------|---------|------------------|--------------|
| train | 17572 | 13718 | 12.63 | 2.65 |
| eval | 1953 | 1536 | 12.77 | 2.67 |
| test | 2170 | 1714 | 12.63 | 2.66 |
## Dataset Creation
### Curation Rationale
The dataset was curated by scraping the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so the curators of the dataset had to manually annotate the data.
### Source Data
#### Initial Data Collection and Normalization
Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotators are the curators of this dataset
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The curators of this dataset are [Jae Hun Ro](https://github.com/JaeHunRo) and [mwurts4google](https://github.com/mwurts4google), the contributors to the official GitHub repository for this dataset. Since the account handles of the other curators are currently unknown, the authors of the paper linked to this dataset are also listed here as curators: [Hao Zhang](https://arxiv.org/search/cs?searchtype=author&query=Zhang%2C+H), [Jae Ro](https://arxiv.org/search/cs?searchtype=author&query=Ro%2C+J), and [Richard Sproat](https://arxiv.org/search/cs?searchtype=author&query=Sproat%2C+R).
### Licensing Information
[MIT License](https://github.com/google-research-datasets/common-crawl-domain-names/blob/master/LICENSE)
### Citation Information
```
@inproceedings{zrs2020urlsegmentation,
title={Semi-supervised URL Segmentation with Recurrent Neural Networks Pre-trained on Knowledge Graph Entities},
author={Hao Zhang and Jae Ro and Richard William Sproat},
booktitle={The 28th International Conference on Computational Linguistics (COLING 2020)},
year={2020}
}
```
### Contributions
Thanks to [@Karthik-Bhaskar](https://github.com/Karthik-Bhaskar) for adding this dataset. |
newsph_nli | 2023-01-25T14:41:24.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:tl",
"license:unknown",
"arxiv:2010.11574",
"region... | null | First benchmark dataset for sentence entailment in the low-resource Filipino language.
Constructed through exploiting the structure of news articles. Contains 600,000 premise-hypothesis pairs,
in 70-15-15 split for training, validation, and testing. | @article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
} | null | 0 | 28 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- tl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
paperswithcode_id: newsph-nli
pretty_name: NewsPH NLI
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 154510599
num_examples: 420000
- name: test
num_bytes: 3283665
num_examples: 9000
- name: validation
num_bytes: 33015530
num_examples: 90000
download_size: 76565287
dataset_size: 190809794
---
# Dataset Card for NewsPH NLI
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NewsPH NLI homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [NewsPH NLI repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [Arxiv paper](https://arxiv.org/pdf/2010.11574.pdf)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
The first benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed by exploiting the structure of news articles. Contains 600,000 premise-hypothesis pairs in a 70-15-15 split for training, validation, and testing.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains news articles in Filipino (Tagalog) scraped from all major Philippine news sites online.
## Dataset Structure
### Data Instances
Sample data:
```
{
  "premise": "Alam ba ninyo ang ginawa ni Erap na noon ay lasing na lasing na rin?",
  "hypothesis": "Ininom niya ang alak na pinagpulbusan!",
  "label": "0"
}
```
### Data Fields
[More Information Needed]
### Data Splits
Contains 600,000 premise-hypothesis pairs, in 70-15-15 split for training, validation, and testing.
## Dataset Creation
### Curation Rationale
We propose the use of news articles for automatically creating benchmark datasets for NLI because of two reasons. First, news articles commonly use single-sentence paragraphing, meaning every paragraph in a news article is limited to a single sentence. Second, straight news articles follow the “inverted pyramid” structure, where every succeeding paragraph builds upon the premise of those that came before it, with the most important information on top and the least important towards the end.
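As an illustration only (the authors' exact pairing and labeling scheme is not documented in this card), the inverted-pyramid structure suggests pairing successive single-sentence paragraphs as entailment-like examples, with paragraphs from other articles as negatives:

```python
import random

def make_pairs(paragraphs, other_paragraphs, pos_label="0", neg_label="1"):
    """Pair each paragraph with its successor (positive) and with a random
    paragraph from another article (negative). Label values are placeholders;
    the card does not document the actual label mapping."""
    pairs = []
    for premise, hypothesis in zip(paragraphs, paragraphs[1:]):
        pairs.append({"premise": premise, "hypothesis": hypothesis, "label": pos_label})
        pairs.append({"premise": premise,
                      "hypothesis": random.choice(other_paragraphs),
                      "label": neg_label})
    return pairs

article = ["First paragraph.", "Second paragraph.", "Third paragraph."]
others = ["A paragraph from an unrelated article."]
print(len(make_pairs(article, others)))  # 4
```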
### Source Data
#### Initial Data Collection and Normalization
To create the dataset, we scrape news articles from all major Philippine news sites online. We collect a total of 229,571 straight news articles, which we then lightly preprocess to remove extraneous unicode characters and correct minimal misspellings. No further preprocessing is done to preserve information in the data.
#### Who are the source language producers?
The dataset was created by Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco, and Charibeth Cheng from De La Salle University and the University of the Philippines.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco and Charibeth Cheng
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Christian Blaise Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{cruz2020investigating,
  title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
  author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
  journal={arXiv preprint arXiv:2010.11574},
  year={2020}
}
```
### Contributions
Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset. |
blinoff/medical_qa_ru_data | 2022-07-02T06:24:13.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ru",
"license:unknown",
"region:us"
] | blinoff | This dataset contains 190,335 Russian Q&A posts from a medical related forum. | null | null | 6 | 28 | ---
annotations_creators: []
language_creators: []
language:
- ru
license:
- unknown
multilinguality:
- monolingual
pretty_name: Medical Q&A Russian Data
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---
### Dataset Summary
This dataset contains 190,335 Russian Q&A posts from a medical-related forum.
### Dataset Fields
* date: date and time of the asked question, like '26 Октября 2018, 08:30'
* categ: question category
* theme: question topic
* desc: question text
* ans: question answers separated with ';\n'
* spec10: if present, one of 10 medical specializations
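Since the answers are stored in a single string separated by ';\n', they can be split back into a list, for example:

```python
def split_answers(ans_field):
    """Split the 'ans' field into a list of individual answers."""
    return [a.strip() for a in ans_field.split(";\n") if a.strip()]

print(split_answers("Take rest;\nDrink plenty of water"))
# ['Take rest', 'Drink plenty of water']
```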
|
german-nlp-group/german_common_crawl | 2023-10-03T14:50:28.000Z | [
"language:de",
"region:us"
] | german-nlp-group | German Only Extract from Common Crawl
This dataset is for pretraining a German language model (unsupervised) or tuning a multilingual model specifically for German | @inproceedings{wenzek2020ccnet,
title={CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data},
author={Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Joulin, Armand and Grave, {\'E}douard},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={4003--4012},
year={2020}
} | null | 7 | 28 | ---
language:
- de
---
# Dataset Card for GermanCommonCrawl
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/German-NLP-Group/german-transformer-training
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** philipp.reissel@rwth-aachen.de
### Dataset Summary
German Only Extract from Common Crawl
Stats:
Total size after deduplication: 142 million pages / 194 GB (gzipped)
Total size before deduplication: 263 million pages / 392 GB (gzipped)
### Supported Tasks and Leaderboards
This Dataset is for pretraining a German Language Model (Unsupervised).
### Languages
German only (some websites are partially in another language). These can be filtered out through the `language_score` attribute.
## Dataset Structure
### Data Instances
```
{'url': 'http://my-shop.ru/shop/books/545473.html',
'date_download': '2016-10-20T19:38:58Z',
'digest': 'sha1:F62EMGYLZDIKF4UL5JZYU47KWGGUBT7T',
'length': 1155,
'nlines': 4,
'source_domain': 'my-shop.ru',
'title': 'Grammatikalische Liebeslieder. Methodische Vorschläge',
'raw_content': 'Grammatikalische Liebeslieder. [....]',
'cc_segment': 'crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/wet/CC-MAIN-20161020183837-00354-ip-10-171-6-4.ec2.internal.warc.wet.gz',
'original_nlines': 99,
'original_length': 2672,
'language': 'de',
'language_score': 1.0,
'perplexity': 283.0,
'bucket': 'head'}"
```
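To keep only pages that are (almost) entirely German, records can be filtered on the `language` and `language_score` fields shown above; the threshold here is an arbitrary choice:

```python
def filter_german(records, min_score=0.98):
    """Keep records identified as German with a high language-id score."""
    return [
        r for r in records
        if r.get("language") == "de" and r.get("language_score", 0.0) >= min_score
    ]

sample = [
    {"url": "http://example.de", "language": "de", "language_score": 1.0},
    {"url": "http://example.com", "language": "de", "language_score": 0.55},
]
print(len(filter_german(sample)))  # 1
```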
### Data Fields
### Data Splits
Train only
## Dataset Creation
### Curation Rationale
Handling and filtering Common Crawl data requires large-scale server resources at a location in the US (for download speed). The total computing time needed to create this dataset is above 100k CPU hours. To give others the opportunity to train models with this dataset easily, we make it publicly available.
In most use cases, model performance improves when the pre-training data is extended, and since this is probably the largest available German dataset, it can help achieve the highest accuracies.
### Source Data
It was filtered from the Common Crawl Snapshots of the following months:
1. 2015-48
2. 2016-18
3. 2016-44
4. 2017-33
5. 2017-30
6. 2017-30
7. 2017-39
8. 2017-51
9. 2018-09
10. 2018-17
11. 2018-30
12. 2018-39
13. 2018-51
14. 2019-09
15. 2019-18
16. 2019-30
17. 2019-47
18. 2020-10
#### Initial Data Collection and Normalization
Filtering and deduplication of each month separately was performed with [CC_Net](https://github.com/facebookresearch/cc_net). The current dataset only contains the best (head) part with the highest text quality (see the CC_Net paper for more details). The middle and tail parts may be uploaded soon as well, or are available on request.
Afterwards, this dataset was deduplicated again to filter out websites that occur in multiple monthly snapshots. This deduplication removes all websites that have either the same URL or the same hash (to filter out websites that are accessible under multiple domains).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{wenzek2020ccnet,
title={CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data},
author={Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Joulin, Armand and Grave, {\'E}douard},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={4003--4012},
year={2020}
``` |
openclimatefix/goes-mrms | 2023-05-12T08:56:03.000Z | [
"region:us"
] | openclimatefix | \ | @InProceedings{noaa::goes-mrms,
title = {EUMETSAT SEVIRI RSS UK HRV},
author={EUMETSAT, with preparation by Open Climate Fix
},
year={2022}
} | null | 0 | 28 | # Dataset Card for Goes-MRMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is a combination of GOES-16 data and MRMS radar precipitation data to roughly match the unreleased dataset used to train Google Research's MetNet. In the papers they used GOES-16 satellite imagery, MultiRadar/Multi-System (MRMS) instantaneous precipitation, hourly cumulative precipitation, and High Resolution Rapid Refresh NWP initializations as inputs to predict future MRMS precipitation rates. The precipitation rates were binned into 0.2mm/hr bins to make the output a classification task, and allow for the models to predict a probability distribution over the region of interest.
Additionally, the input image patches are much larger than the target image patches. For MetNet, the input images covered 512x512 km area, while the target was the center 64x64 km crop. For MetNet-2 the input covered 2048x2048 km with the target being the central 512x512 km.
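The binning and center-cropping described above can be sketched in a few lines. This is an illustrative sketch, not the dataset's actual processing code: the 0.2 mm/hr bin width follows the MetNet description, and `center_crop` assumes a square input patch.

```python
def rate_to_bin(rate_mm_hr, bin_width=0.2):
    """Map a precipitation rate (mm/hr) to a class index for fixed-width bins."""
    return int(rate_mm_hr // bin_width)

def center_crop(patch, crop):
    """Take the central crop x crop region of a square 2-D patch (list of lists)."""
    n = len(patch)
    off = (n - crop) // 2
    return [row[off:off + crop] for row in patch[off:off + crop]]

# Example: an 8x8 "input" patch cropped to its central 4x4 "target",
# mirroring the input-larger-than-target setup described above.
patch = [[r * 8 + c for c in range(8)] for r in range(8)]
target = center_crop(patch, 4)
```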
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
MetNet (January 2018-July 2019) (16 days training, 2 days validation, 2 days test)
MetNet-2 (July 2017-August 2020) (Non-overlapping time ranges with 12-hour blackouts in between)
Full (July 2017-January 2022) (Train: 2017-2020, except for the first of each month; Validation: first of each month, July 2017-2020; Test: 2021-2022)
## Dataset Creation
### Curation Rationale
The original curation rationale was forecasting precipitation rates in a probabilistic way. This dataset covers a different time period than the original paper, going from July 2017 through December 2021. There is a split available to match the temporal coverage of the original MetNet paper (January 2018 to July 2019) or the MetNet-2 paper (July 2017 to August 2020).
### Source Data
#### Initial Data Collection and Normalization
From the MetNet paper: "For both MRMS and GOES we acquired data for the period January 2018 through July 2019. We split the data temporally into three non-overlapping data sets by repeatedly using approximately 16 days for training followed by two days for validation and two days for testing. From these temporal splits we randomly extracted 13,717 test and validation samples and kept increasing the training set size until we observed no over-fitting at 1.72 million training samples."
From the MetNet-2 paper: "The training data consists of 1,230,585 patches of size 2048 km x 2048 km at the input and targets of size 512 km x 512 km including all 360 (2 to 720 minutes) time slices. The training area covers a region of 7000x2500 kilometers. We sample target patches from the input context region minus an all around border of 512 km. The input context is padded for all regions outside of the 7000x2500 CONUS. The validation data used for developing the models consists of 11,991 patches and the test data of 39,864 patches. The training, validation and test data are drawn from non-overlapping ranges of hours, with black out periods of 12 hours in between, over a period of observations of 3 years from July 2017 to August 2020. This ensures that the model does not learn any spurious training and evaluation correlations within any single day. HRRR only generates forecasts starting at full hours."
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Jacob Bieker (jacob@openclimatefix.org)
MetNet-1 split: MetNet Authors
MetNet-2 split: MetNet-2 Authors
### Licensing Information
All data is open and without restrictions from NOAA.
### Citation Information
Please cite NOAA as the data provider. |
turingbench/TuringBench | 2022-10-25T09:56:51.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:apache-2.0",
... | turingbench | This benchmark environment contains a dataset comprised of generated texts from pre-trained language models.
We also have two benchmark tasks - human vs. machine (i.e., binary classification) and authorship
attribution (i.e., multi-class classification). These benchmark tasks and dataset are hosted on the
TuringBench website with Leaderboards for each task. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 0 | 28 | ---
YAML tags:
annotations_creators:
- found
language_creators:
- found
- machine-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for [TuringBench]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/TuringBench/TuringBench
- **Repository:** https://github.com/TuringBench/TuringBench
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@TuringBench](https://github.com/TuringBench) for adding this dataset.
|
FabianWillner/triviaQARC | 2022-06-14T09:54:14.000Z | [
"region:us"
] | FabianWillner | null | null | null | 0 | 28 | Entry not found |
juletxara/tydiqa_xtreme | 2022-07-01T19:19:05.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:extended|wikipedia",
"language:en",
"language:ar",
"language:bn",
"language:fi",
"l... | juletxara | TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
We also include "translate-train" and "translate-test" splits for each non-English languages from XTREME (Hu et al., 2020). These splits are the automatic translations from English to each target language used in the XTREME paper [https://arxiv.org/abs/2003.11080]. The "translate-train" split purposefully ignores the non-English TyDiQA-GoldP training data to simulate the transfer learning scenario where original-language data is not available and system builders must rely on labeled English data plus existing machine translation systems. | @article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
} | null | 1 | 28 | ---
pretty_name: TyDi QA
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- id
- ja
- sw
- ko
- ru
- te
- th
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: tydi-qa
---
# Dataset Card for "tydiqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3726.74 MB
- **Size of the generated dataset:** 5812.92 MB
- **Total amount of disk used:** 9539.67 MB
### Dataset Summary
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
don’t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
We also include "translate-train" and "translate-test" splits for each non-English languages from XTREME (Hu et al., 2020). These splits are the automatic translations from English to each target language used in the XTREME paper [https://arxiv.org/abs/2003.11080]. The "translate-train" split purposefully ignores the non-English TyDiQA-GoldP training data to simulate the transfer learning scenario where original-language data is not available and system builders must rely on labeled English data plus existing machine translation systems.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### primary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 5757.59 MB
- **Total amount of disk used:** 7620.96 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"annotations": {
"minimal_answers_end_byte": [-1, -1, -1],
"minimal_answers_start_byte": [-1, -1, -1],
"passage_answer_candidate_index": [-1, -1, -1],
"yes_no_answer": ["NONE", "NONE", "NONE"]
},
"document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
"document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
"document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
"language": "thai",
"passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
"question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
}
```
#### secondary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 55.34 MB
- **Total amount of disk used:** 1918.71 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [394],
"text": ["بطولتين"]
},
"context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
"id": "arabic-2387335860751143628-1",
"question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
"title": "قائمة نهائيات كأس العالم"
}
```
### Data Fields
The data fields are the same among all splits.
#### primary_task
- `passage_answer_candidates`: a dictionary feature containing:
- `plaintext_start_byte`: a `int32` feature.
- `plaintext_end_byte`: a `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
- `passage_answer_candidate_index`: a `int32` feature.
- `minimal_answers_start_byte`: a `int32` feature.
- `minimal_answers_end_byte`: a `int32` feature.
- `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.
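As a usage sketch, the byte-offset fields above index into the UTF-8 encoding of `document_plaintext`, so passage candidates should be recovered by slicing bytes rather than characters. The helper below is a hypothetical illustration, not part of the dataset loader:

```python
def passage_text(document_plaintext, start_byte, end_byte):
    """Recover a passage candidate from UTF-8 byte offsets
    (plaintext_start_byte / plaintext_end_byte)."""
    raw = document_plaintext.encode("utf-8")
    return raw[start_byte:end_byte].decode("utf-8")

# Offsets count UTF-8 bytes, not characters: the 6 Thai characters below
# occupy 18 bytes (3 bytes each).
doc = "สวัสดี world"
first = passage_text(doc, 0, 18)  # -> "สวัสดี"
```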
#### secondary_task
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name | train | validation |
| -------------- | -----: | ---------: |
| primary_task | 166916 | 18670 |
| secondary_task | 49881 | 5077 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
}
```
```
@inproceedings{ruder-etal-2021-xtreme,
title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation",
author = "Ruder, Sebastian and
Constant, Noah and
Botha, Jan and
Siddhant, Aditya and
Firat, Orhan and
Fu, Jinlan and
Liu, Pengfei and
Hu, Junjie and
Garrette, Dan and
Neubig, Graham and
Johnson, Melvin",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.802",
doi = "10.18653/v1/2021.emnlp-main.802",
pages = "10215--10245",
}
```
|
SetFit/wsc | 2022-06-10T13:59:09.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 28 | # Glue WSC
This dataset is a port of the official [`wsc` dataset](https://huggingface.co/datasets/super_glue) on the Hub.
Note that the test split is not labeled; the label column values are always -1.
|
tner/mit_movie_trivia | 2022-07-18T10:24:52.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"region:us"
] | tner | MIT Movie | null | null | 2 | 28 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MIT Movie
---
# Dataset Card for "tner/mit_movie_trivia"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Dataset:** MIT Movie
- **Domain:** Movie
- **Number of Entity:** 12
### Dataset Summary
MIT Movie NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `Actor`, `Plot`, `Opinion`, `Award`, `Year`, `Genre`, `Origin`, `Director`, `Soundtrack`, `Relationship`, `Character_Name`, `Quote`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 13, 14, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4],
'tokens': ['a', 'steven', 'spielberg', 'film', 'featuring', 'a', 'bluff', 'called', 'devil', 's', 'tower', 'and', 'a', 'spectacular', 'mothership']
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/mit_movie_trivia/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-Actor": 1,
"I-Actor": 2,
"B-Plot": 3,
"I-Plot": 4,
"B-Opinion": 5,
"I-Opinion": 6,
"B-Award": 7,
"I-Award": 8,
"B-Year": 9,
"B-Genre": 10,
"B-Origin": 11,
"I-Origin": 12,
"B-Director": 13,
"I-Director": 14,
"I-Genre": 15,
"I-Year": 16,
"B-Soundtrack": 17,
"I-Soundtrack": 18,
"B-Relationship": 19,
"I-Relationship": 20,
"B-Character_Name": 21,
"I-Character_Name": 22,
"B-Quote": 23,
"I-Quote": 24
}
```
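Given the mapping above, tag-id sequences can be decoded back into typed entity spans. The helper below is an illustrative sketch (not part of TNER itself):

```python
def decode_entities(tokens, tags, id2label):
    """Group IOB-tagged tokens into (entity_type, text) spans."""
    spans, current_type, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        label = id2label[tag]
        if label.startswith("B-"):
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            current_tokens.append(token)
        else:  # "O", or an I- tag that does not continue the open span
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

# Subset of the label2id mapping shown above, for illustration.
label2id = {"O": 0, "B-Actor": 1, "I-Actor": 2, "B-Director": 13, "I-Director": 14}
id2label = {v: k for k, v in label2id.items()}
entities = decode_entities(
    ["a", "steven", "spielberg", "film"], [0, 13, 14, 0], id2label
)
```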
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|mit_movie_trivia |6816 | 1000| 1953|
|
allenai/wcep_sparse_max | 2022-11-24T15:03:54.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | null | 0 | 28 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
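The actual retrieval used PyTerrier's BM25. As a rough illustration of the scoring behind it, here is a minimal pure-Python BM25 ranker that returns the top-`k` document indices for a query. This is a toy sketch with naive whitespace tokenization, not the pipeline used to build this dataset:

```python
import math
from collections import Counter

def bm25_rank(query, corpus, k=10, k1=1.2, b=0.75):
    """Rank corpus documents against a query with BM25; return top-k indices."""
    docs = [doc.lower().split() for doc in corpus]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                       # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        s = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append((s, i))
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

corpus = [
    "storm brings heavy rain to the coast",
    "parliament passes new budget bill",
    "heavy rain and flooding expected after storm",
]
top = bm25_rank("storm rain", corpus, k=2)
```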
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.5919 | 0.6588 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.5988 | 0.6346 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6296 | 0.6746 | |
Harsit/xnli2.0_train_swahili | 2022-10-15T09:22:30.000Z | [
"region:us"
] | Harsit | null | null | null | 0 | 28 | Entry not found |
bigbio/cellfinder | 2022-12-22T15:44:19.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | bigbio | The CellFinder project aims to create a stem cell data repository by linking information from existing public databases and by performing text mining on the research literature. The first version of the corpus is composed of 10 full text documents containing more than 2,100 sentences, 65,000 tokens and 5,200 annotations for entities. The corpus has been annotated with six types of entities (anatomical parts, cell components, cell lines, cell types, genes/protein and species) with an overall inter-annotator agreement around 80%.
See: https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/ | @inproceedings{neves2012annotating,
title = {Annotating and evaluating text for stem cell research},
author = {Neves, Mariana and Damaschun, Alexander and Kurtz, Andreas and Leser, Ulf},
year = 2012,
booktitle = {
Proceedings of the Third Workshop on Building and Evaluation Resources for
Biomedical Text Mining\ (BioTxtM 2012) at Language Resources and Evaluation
(LREC). Istanbul, Turkey
},
pages = {16--23},
organization = {Citeseer}
} | null | 0 | 28 |
---
language:
- en
bigbio_language:
- English
license: cc-by-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_3p0
pretty_name: CellFinder
homepage: https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for CellFinder
## Dataset Description
- **Homepage:** https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The CellFinder project aims to create a stem cell data repository by linking information from existing public databases and by performing text mining on the research literature. The first version of the corpus is composed of 10 full text documents containing more than 2,100 sentences, 65,000 tokens and 5,200 annotations for entities. The corpus has been annotated with six types of entities (anatomical parts, cell components, cell lines, cell types, genes/protein and species) with an overall inter-annotator agreement around 80%.
See: https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/
## Citation Information
```
@inproceedings{neves2012annotating,
title = {Annotating and evaluating text for stem cell research},
author = {Neves, Mariana and Damaschun, Alexander and Kurtz, Andreas and Leser, Ulf},
year = 2012,
booktitle = {
Proceedings of the Third Workshop on Building and Evaluation Resources for
Biomedical Text Mining\ (BioTxtM 2012) at Language Resources and Evaluation
(LREC). Istanbul, Turkey
},
pages = {16--23},
organization = {Citeseer}
}
```
|
shjwudp/chinese-c4 | 2023-06-20T11:40:06.000Z | [
"language:zh",
"license:cc-by-4.0",
"region:us"
] | shjwudp | null | null | null | 11 | 28 | ---
license: cc-by-4.0
language:
- zh
---
## Introduction
Chinese-C4 is a clean Chinese internet dataset based on Common Crawl. The dataset is 46.29GB and has undergone multiple cleaning strategies, including Chinese filtering, heuristic cleaning based on punctuation, line-based hashing for deduplication, and repetition removal.
The dataset is open source and free for commercial use. You are welcome to use the data and the cleaning strategies provided, and to contribute your own cleaning strategies.
You can find the cleaning script for the dataset on GitHub [c4-dataset-script](https://github.com/shjwudp/c4-dataset-script).
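As an illustration of the line-based hashing strategy mentioned above (a minimal sketch, not the actual cleaning script linked here), repeated lines such as boilerplate can be dropped by hashing each line and keeping only first occurrences across the corpus:

```python
import hashlib

def dedup_lines(documents):
    """Drop lines whose hash has been seen before anywhere in the corpus --
    a minimal sketch of line-based hash deduplication."""
    seen = set()
    cleaned = []
    for doc in documents:
        kept = []
        for line in doc.splitlines():
            key = hashlib.md5(line.strip().encode("utf-8")).hexdigest()
            if key in seen:
                continue  # boilerplate line already seen in an earlier document
            seen.add(key)
            kept.append(line)
        cleaned.append("\n".join(kept))
    return cleaned

# The shared first line ("all rights reserved") survives only in the first doc.
docs = ["版权所有\n这是正文第一段", "版权所有\n另一篇文章的正文"]
out = dedup_lines(docs)
```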
|
Dahoas/code-review-instruct-critique-revision-python | 2023-01-08T15:22:19.000Z | [
"region:us"
] | Dahoas | null | null | null | 4 | 28 | Entry not found |
HeNLP/HeDC4 | 2023-04-24T06:04:29.000Z | [
"task_categories:fill-mask",
"size_categories:1B<n<10B",
"language:he",
"arxiv:2304.11077",
"region:us"
] | HeNLP | null | null | null | 3 | 28 | ---
task_categories:
- fill-mask
language:
- he
size_categories:
- 1B<n<10B
---
### Dataset Summary
A Hebrew Deduplicated and Cleaned Common Crawl Corpus. A thoroughly cleaned and
approximately deduplicated dataset for unsupervised learning.
### Citing
If you use HeDC4 in your research, please cite [HeRo: RoBERTa and Longformer Hebrew Language Models](http://arxiv.org/abs/2304.11077).
```
@article{shalumov2023hero,
title={HeRo: RoBERTa and Longformer Hebrew Language Models},
author={Vitaly Shalumov and Harel Haskey},
year={2023},
journal={arXiv:2304.11077},
}
``` |
keremberke/table-extraction | 2023-01-18T09:43:03.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Documents",
"region:us"
] | keremberke | null | \ | null | 7 | 28 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Documents
---
<div align="center">
<img width="640" alt="keremberke/table-extraction" src="https://huggingface.co/datasets/keremberke/table-extraction/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['bordered', 'borderless']
```
### Number of Images
```json
{'test': 34, 'train': 238, 'valid': 70}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/table-extraction", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 18, 2023 at 9:41 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 342 images.
Data-table are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
|
wwydmanski/blog-feedback | 2023-02-25T16:03:19.000Z | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"tabular",
"region:us"
] | wwydmanski | null | null | null | 0 | 28 | ---
task_categories:
- tabular-regression
- tabular-classification
tags:
- tabular
size_categories:
- 10K<n<100K
---
## Source
Source: [UCI](https://archive.ics.uci.edu/ml/datasets/BlogFeedback)
## Data Set Information:
This data originates from blog posts. The raw HTML documents
of the blog posts were crawled and processed.
The prediction task associated with the data is the prediction
of the number of comments in the upcoming 24 hours. In order
to simulate this situation, we choose a basetime (in the past)
and select the blog posts that were published at most
72 hours before the selected base date/time. Then, we calculate
all the features of the selected blog posts from the information
that was available at the basetime, therefore each instance
corresponds to a blog post. The target is the number of
comments that the blog post received in the next 24 hours
relative to the basetime.
In the train data, the basetimes were in the years
2010 and 2011. In the test data the basetimes were
in February and March 2012. This simulates the real-world
situation in which training data from the past is available
to predict events in the future.
The train data was generated from different basetimes that may
temporally overlap. Therefore, if you simply split the train data
into disjoint partitions, the underlying time intervals may
overlap. You should therefore use the provided, temporally
disjoint train and test splits in order to ensure that the
evaluation is fair.
## Attribute Information:
1...50:Average, standard deviation, min, max and median of them attributes 51...60 for the source of the current blog post. With source we mean the blog on which the post appeared.
For example, myblog.blog.org would be the source of the post myblog.blog.org/post_2010_09_10
51: Total number of comments before basetime
52: Number of comments in the last 24 hours before the
basetime
53: Let T1 denote the datetime 48 hours before basetime,
Let T2 denote the datetime 24 hours before basetime.
This attribute is the number of comments in the time period
between T1 and T2
54: Number of comments in the first 24 hours after the
publication of the blog post, but before basetime
55: The difference of Attribute 52 and Attribute 53
56...60:
The same features as the attributes 51...55, but
features 56...60 refer to the number of links (trackbacks),
while features 51...55 refer to the number of comments.
61: The length of time between the publication of the blog post
and basetime
62: The length of the blog post
63...262:
The 200 bag of words features for 200 frequent words of the
text of the blog post
263...269: binary indicator features (0 or 1) for the weekday
(Monday...Sunday) of the basetime
270...276: binary indicator features (0 or 1) for the weekday
(Monday...Sunday) of the date of publication of the blog
post
277: Number of parent pages: we consider a blog post P as a
parent of blog post B, if B is a reply (trackback) to
blog post P.
278...280:
Minimum, maximum, average number of comments that the
parents received
281: The target: the number of comments in the next 24 hours
(relative to basetime)
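The definitions of the comment-count attributes 51...55 can be sketched in code. This is a minimal illustration of the rules stated above, not the original feature-extraction pipeline; all function and variable names here are our own:

```python
from datetime import datetime, timedelta

def comment_count_features(comment_times, publication_time, basetime):
    """Compute attributes 51-55 from a list of comment datetimes.

    51: total comments before basetime
    52: comments in the last 24 hours before basetime
    53: comments between 48h and 24h before basetime
    54: comments in the first 24h after publication, but before basetime
    55: difference of attributes 52 and 53
    """
    day = timedelta(hours=24)
    before = [t for t in comment_times if t < basetime]
    a51 = len(before)
    a52 = sum(1 for t in before if t >= basetime - day)
    a53 = sum(1 for t in before if basetime - 2 * day <= t < basetime - day)
    a54 = sum(1 for t in before if t < publication_time + day)
    a55 = a52 - a53
    return a51, a52, a53, a54, a55
```

The trackback attributes 56...60 follow the same pattern, applied to link events instead of comments.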
|
hazemessam/squad_v2 | 2023-03-06T15:18:07.000Z | [
"license:cc-by-4.0",
"region:us"
] | hazemessam | null | null | null | 0 | 28 | ---
license: cc-by-4.0
---
This dataset is based on [SQuAD V2.0](https://rajpurkar.github.io/SQuAD-explorer/).
It contains the following:
* Train and Validation sets.
* The end position of each answer. |
Amitesh007/twitter_parsed_dataset | 2023-03-11T12:58:24.000Z | [
"region:us"
] | Amitesh007 | null | null | null | 0 | 28 | Entry not found |
bigcode/the-stack-github-issues | 2023-03-20T18:07:26.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en",
"region:us"
] | bigcode | null | null | null | 13 | 28 | ---
annotations_creators: []
language_creators:
- crowdsourced
language:
- en
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
extra_gated_prompt: |-
## Terms of Use for The Stack
The Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:
1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.
By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
---
## Dataset Description
This dataset contains conversations from GitHub issues and Pull Requests. Each conversation comprises a series of events, such as opening an issue, creating a comment,
or closing the issue, and includes the author's username, text, action, and identifiers such as the issue ID and number.
The dataset, which is mostly in English, has a total size of 54GB and contains 30.9M files.
## Dataset Structure
```python
from datasets import load_dataset
dataset = load_dataset("bigcode/the-stack-github-issues")
dataset
```
```
Dataset({
features: ['repo', 'issue_id', 'issue_number', 'pull_request', 'events', 'text_size', 'content', 'usernames'],
num_rows: 30982955
})
```
- `content` contains the full text of the conversation concatenated with special tokens: `<issue_start>` for the beginning of the issue, `<issue_comment>` before each comment
and `<issue_closed>` if a conversation is closed. Each comment is prepended with `username_{i}:`, where `username_{i}` is the mask for author `i`. This column
is intended for model training: masking avoids memorizing usernames while preserving the structure of the conversation.
- `events` contains the detailed events from which `content` was built; it also includes each event's author username and the mask used.
Below is an example:
````
{'content': '<issue_start><issue_comment>Title: Click Save: Sorry, Cannot Write\n
'username_0: Hi all, Edit a file in Ice, click Save Icon\n Get error message: Sorry, cannot write /var/www/index.html
...
Edit: Also getting error: Cannot Zip Files up.\n
<issue_comment>username_1: hi there i have a similar problem. I cant save the files...',
'events': [{'action': 'opened',
'author': 'LaZyLion-ca',
'comment_id': None,
'datetime': '2013-06-06T13:30:31Z',
'masked_author': 'username_0',
'text': 'Hi all, Edit a file in Ice, click Save Icon...'
'title': 'Click Save: Sorry, Cannot Write',
'type': 'issue'},
...],
'issue_id': 15222443,
'issue_number': 264,
'pull_request': None,
'repo': 'icecoder/ICEcoder',
'text_size': 525,
'usernames': '["LaZyLion-ca", "seyo-IV"]'}
````
### Dataset pre-processing
This dataset was collected as part of [The Stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, and the curation rationale can be found at this [link](https://huggingface.co/datasets/bigcode/the-stack#source-data).
To improve the quality of the dataset and remove personally identifiable information (PII), we performed the following cleaning steps, which reduced the dataset's size from 180GB to 54GB:
- We first removed automated text generated when users reply using their emails, using regex matching. We also deleted issues with little text (less than 200 total characters) and truncated long comments in the middle
(to a maximum of 100 lines while keeping the last 20 lines). This step removed 18% of the volume.
- We deleted comments from bots by looking for keywords in the author's username. If an issue became empty after this filtering, we removed it.
We also removed comments that preceded those from bots if they triggered them, by looking for the bot's username inside the text. This step removed 61% of the remaining volume and 22% of the conversations,
as bot-generated comments tend to be very long.
- We then used the number of users in the conversation as a proxy for quality. We kept all conversations with two or more users.
If a conversation had only one user, we kept it only if the total text was larger than 200 characters and smaller than 7000 characters.
We also removed issues with more than 10 events, as we noticed that they were of low quality or from bots we missed in the previous filtering.
This filtering removed 4% of the volume and 30% of the conversations.
- To redact PII, we masked IP addresses, email addresses, and secret keys from the text using regexes. We also masked the usernames of the authors from
the comments and replaced them with username_{i}, where i is the order of the author in the conversation. |
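The username-masking rule described above (replace each author with `username_{i}`, where `i` is the order in which the author first appears in the conversation) can be sketched as follows. This is our own simplified illustration, not the actual preprocessing code:

```python
def mask_usernames(events):
    """Assign username_{i} masks to authors in order of first appearance.

    Returns the events annotated with a masked_author field, plus the
    author -> mask mapping.
    """
    masks = {}
    masked_events = []
    for event in events:
        author = event["author"]
        if author not in masks:
            masks[author] = f"username_{len(masks)}"
        masked_events.append({**event, "masked_author": masks[author]})
    return masked_events, masks
```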
climatebert/tcfd_recommendations | 2023-04-19T07:08:06.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | climatebert | null | null | null | 1 | 28 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: TCFDRecommendations
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': metrics
'2': strategy
'3': risk
'4': governance
splits:
- name: train
num_bytes: 638487
num_examples: 1300
- name: test
num_bytes: 222330
num_examples: 400
download_size: 492631
dataset_size: 860817
---
# Dataset Card for tcfd_recommendations
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for classifying the TCFD recommendation categories ([fsb-tcfd.org](https://www.fsb-tcfd.org)) of paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a multiclass classification task of paragraphs into the four TCFD recommendation categories (governance, strategy, risk management, metrics and targets) and the non-climate-related class.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> none (i.e., not climate-related), 1 -> metrics, 2 -> strategy, 3 -> risk, 4 -> governance)
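For convenience, the integer labels can be mapped back to the TCFD category names with a small helper. This is our own sketch based on the mapping above, not part of the dataset itself:

```python
# Mapping taken from the label description in this card.
TCFD_LABELS = {0: "none", 1: "metrics", 2: "strategy", 3: "risk", 4: "governance"}

def label_name(label_id):
    """Map a class id from this dataset to its TCFD recommendation category."""
    return TCFD_LABELS[label_id]
```

For example, the instance shown above with `label: 1` falls under the "metrics" (metrics and targets) category.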
### Data Splits
The dataset is split into:
- train: 1,300
- test: 400
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. |
ruanchaves/assin2_por_Latn_to_spa_Latn | 2023-04-22T19:11:58.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 926284
num_examples: 6500
- name: test
num_bytes: 359987
num_examples: 2448
- name: validation
num_bytes: 71410
num_examples: 500
download_size: 0
dataset_size: 1357681
---
# Dataset Card for "assin2_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ruanchaves/assin_por_Latn_to_spa_Latn | 2023-04-22T19:13:06.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
'2': PARAPHRASE
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 1052463
num_examples: 5000
- name: test
num_bytes: 820108
num_examples: 4000
- name: validation
num_bytes: 210810
num_examples: 1000
download_size: 0
dataset_size: 2083381
---
# Dataset Card for "assin_por_Latn_to_spa_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
donfu/oa-stackexchange | 2023-04-23T17:45:09.000Z | [
"language:en",
"language:uk",
"language:ru",
"language:de",
"language:fr",
"language:it",
"language:es",
"license:cc-by-sa-4.0",
"region:us"
] | donfu | null | null | null | 7 | 28 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
struct:
- name: answer_score
dtype: int64
- name: question_score
dtype: int64
- name: tags
dtype: string
splits:
- name: train
num_bytes: 6549838664
num_examples: 6331083
download_size: 3755782987
dataset_size: 6549838664
license: cc-by-sa-4.0
language:
- en
- uk
- ru
- de
- fr
- it
- es
pretty_name: Open-Assistant StackExchange Instruction
---
# Stackexchange Instructions for OpenAssistant
This dataset is taken from https://archive.org/details/stackexchange.
There's a single parquet file combining all stackexchange sites. The threads
have been filtered as follows: only threads with an accepted answer, for which
both the question and the accepted answer are less than 1000 characters, have been kept.
Other answers, questions without accepted answers, and long entries have been
dropped.
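The filtering rule can be sketched as follows. This is a simplified illustration of the criterion described above, not the actual extraction code in `process.py`; the field names (`AcceptedAnswerId`, `Id`, `Body`) follow the stackexchange Posts.xml attributes, and the helper itself is our own:

```python
def keep_thread(question, answers):
    """Return an OA-format row for a thread, or None if it should be dropped.

    A thread is kept only if it has an accepted answer and both the question
    body and the accepted answer body are under 1000 characters.
    """
    accepted_id = question.get("AcceptedAnswerId")
    if accepted_id is None:
        return None
    accepted = next((a for a in answers if a["Id"] == accepted_id), None)
    if accepted is None:
        return None
    if len(question["Body"]) >= 1000 or len(accepted["Body"]) >= 1000:
        return None
    return {"INSTRUCTION": question["Body"], "RESPONSE": accepted["Body"]}
```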
Each row consists of
- INSTRUCTION
- RESPONSE
- SOURCE ("stackexchange-ai")
- METADATA (tags, question_score, answer_score).
Original extraction code by https://github.com/b-mc2
## How to Reproduce this Dataset
1. Download all XML files from the stackexchange archive into the xml/ folder
```
./download.py
```
2. Process the XML, filter conversations and convert to OA format into parquet/ folder
```
./process.py
```
3. Run stats on all files in the parquet/ folder
```
./stats.py
```
4. Combine all parquet files into one large stackexchange.parquet file
```
./combine.py
```
5. Upload to huggingface hub; you'll first need to log in with `huggingface-cli login`
```
./upload.py
```
## Statistics
- 3dprinting: 1,006
- academia: 6,956
- ai: 1,169
- android: 11,591
- anime: 3,688
- apple: 32,603
- arduino: 3,725
- askubuntu: 78,472
- astronomy: 2,425
- aviation: 4,945
- avp: 1,949
- beer: 387
- bicycles: 4,835
- bioacoustics: 70
- bioinformatics: 903
- biology: 5,344
- bitcoin: 7,456
- blender: 25,527
- boardgames: 4,538
- bricks: 1,457
- buddhism: 911
- cardano: 670
- chemistry: 7,430
- chess: 2,185
- chinese: 4,897
- christianity: 1,248
- civicrm: 3,221
- codegolf: 943
- codereview: 2,171
- coffee: 350
- cogsci: 645
- computergraphics: 540
- conlang: 101
- cooking: 7,951
- craftcms: 4,533
- crafts: 438
- crypto: 4,425
- cs: 9,478
- cseducators: 71
- cstheory: 2,196
- datascience: 5,045
- dba: 16,850
- devops: 961
- diy: 14,400
- drones: 190
- drupal: 24,090
- dsp: 4,470
- earthscience: 922
- ebooks: 323
- economics: 2,120
- electronics: 41,717
- elementaryos: 1,769
- ell: 30,428
- emacs: 7,140
- engineering: 2,314
- english: 42,415
- eosio: 626
- es_stackoverflow: 21,475
- esperanto: 617
- ethereum: 9,603
- expatriates: 973
- expressionengine: 3,638
- fitness: 1,833
- freelancing: 338
- french: 5,193
- gamedev: 9,678
- gaming: 44,899
- gardening: 4,492
- genealogy: 487
- german: 6,715
- gis: 30,249
- graphicdesign: 10,563
- ham: 790
- hardwarerecs: 647
- health: 804
- hermeneutics: 782
- hinduism: 1,036
- history: 1,776
- homebrew: 2,357
- hsm: 484
- interpersonal: 199
- iot: 331
- iota: 292
- islam: 1,496
- italian: 1,356
- ja_stackoverflow: 9,734
- japanese: 13,862
- joomla: 1,875
- judaism: 6,156
- korean: 754
- languagelearning: 135
- latin: 1,387
- law: 3,475
- lifehacks: 934
- linguistics: 1,507
- literature: 582
- magento: 20,537
- martialarts: 364
- materials: 338
- math: 501,019
- matheducators: 316
- mathematica: 19,529
- mathoverflow_net_7z: 23,803
- mechanics: 4,735
- meta: 34,161
- meta_askubuntu: 2,076
- meta_mathoverflow_net_7z: 333
- meta_serverfault: 823
- meta_stackoverflow: 12,641
- meta_superuser: 1,748
- moderators: 39
- monero: 1,443
- money: 7,996
- movies: 6,789
- music: 5,740
- musicfans: 781
- mythology: 271
- networkengineering: 4,637
- opendata: 1,117
- opensource: 805
- or: 586
- outdoors: 1,503
- parenting: 815
- patents: 582
- pets: 1,081
- philosophy: 1,505
- photo: 6,386
- physics: 35,386
- pm: 982
- poker: 431
- politics: 1,903
- portuguese: 658
- proofassistants: 87
- pt_stackoverflow: 27,650
- puzzling: 11,959
- quant: 3,303
- quantumcomputing: 1,604
- raspberrypi: 6,794
- retrocomputing: 1,016
- reverseengineering: 1,606
- robotics: 1,020
- rpg: 9,517
- ru_stackoverflow: 106,714
- rus: 8,210
- russian: 1,960
- salesforce: 27,962
- scicomp: 1,403
- scifi: 15,174
- security: 11,733
- serverfault: 81,229
- sharepoint: 24,934
- sitecore: 2,691
- skeptics: 1,043
- softwareengineering: 10,526
- softwarerecs: 3,032
- solana: 602
- sound: 2,031
- space: 3,145
- spanish: 3,049
- sports: 1,715
- sqa: 1,944
- stackapps: 702
- stackoverflow: 4,269,779
- stats: 23,102
- stellar: 373
- substrate: 812
- superuser: 128,488
- sustainability: 240
- tex: 42,808
- tezos: 635
- tor: 887
- travel: 9,957
- tridion: 1,769
- ukrainian: 577
- unix: 54,338
- ux: 7,403
- vegetarianism: 151
- vi: 4,360
- webapps: 10,159
- webmasters: 9,413
- windowsphone: 1,110
- woodworking: 677
- wordpress: 24,270
- workplace: 4,104
- worldbuilding: 2,766
- writers: 1,957
---
## License

cc-by-sa-4.0. See https://archive.org/details/stackexchange for details.
|
ruanchaves/assin2_por_Latn_to_cat_Latn | 2023-04-22T19:12:42.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 861279
num_examples: 6500
- name: test
num_bytes: 334797
num_examples: 2448
- name: validation
num_bytes: 66362
num_examples: 500
download_size: 0
dataset_size: 1262438
---
# Dataset Card for "assin2_por_Latn_to_cat_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ruanchaves/assin_por_Latn_to_cat_Latn | 2023-04-22T19:13:02.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
'2': PARAPHRASE
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 1044670
num_examples: 5000
- name: test
num_bytes: 813508
num_examples: 4000
- name: validation
num_bytes: 209140
num_examples: 1000
download_size: 0
dataset_size: 2067318
---
# Dataset Card for "assin_por_Latn_to_cat_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ruanchaves/assin2_por_Latn_to_glg_Latn | 2023-04-22T19:13:14.000Z | [
"region:us"
] | ruanchaves | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 873989
num_examples: 6500
- name: test
num_bytes: 340838
num_examples: 2448
- name: validation
num_bytes: 67669
num_examples: 500
download_size: 0
dataset_size: 1282496
---
# Dataset Card for "assin2_por_Latn_to_glg_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
brackozi/Resume | 2023-04-28T10:24:25.000Z | [
"license:mit",
"region:us"
] | brackozi | null | null | null | 3 | 28 | ---
license: mit
---
|
lighteval/bbq_helm | 2023-05-03T08:23:41.000Z | [
"region:us"
] | lighteval | null | @article{DBLP:journals/corr/abs-2110-08193,
author = {Alicia Parrish and
Angelica Chen and
Nikita Nangia and
Vishakh Padmakumar and
Jason Phang and
Jana Thompson and
Phu Mon Htut and
Samuel R. Bowman},
title = {{BBQ:} {A} Hand-Built Bias Benchmark for Question Answering},
journal = {CoRR},
volume = {abs/2110.08193},
year = {2021},
url = {https://arxiv.org/abs/2110.08193},
eprinttype = {arXiv},
eprint = {2110.08193},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-08193.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 1 | 28 | Entry not found |
Multimodal-Fatima/CIFAR100_test | 2023-05-30T15:52:01.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 1 | 28 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': cra
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: id
dtype: int64
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
splits:
- name: test
num_bytes: 27693774.0
num_examples: 10000
download_size: 23948177
dataset_size: 27693774.0
---
# Dataset Card for "CIFAR100_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
clarin-knext/dbpedia-pl-qrels | 2023-06-07T08:12:37.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 28 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
clarin-knext/nfcorpus-pl-qrels | 2023-06-07T08:10:48.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 28 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
argilla/comparison-data-falcon-with-feedback | 2023-06-07T14:38:44.000Z | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | null | 1 | 28 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for comparison-data-falcon-with-feedback
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.cfg`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/comparison-data-falcon-with-feedback")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/comparison-data-falcon-with-feedback")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| instruction | User instruction | TextField | True | False |
| response-1 | Response-1 | TextField | True | False |
| response-2 | Response-2 | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| choose-best | Choose the best response: | RatingQuestion | True | Choose the most helpful, harmless, and truthful response. Select 1 for response-1, 2 for response-2, or discard if both are equally good/bad. | [1, 2] |
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "518",
"fields": {
"instruction": "What is Depreciation",
"response-1": "Depreciation is the drop in value of an asset due to wear and tear, age and obsolescence (going out of date) as recorded in an organization\u0027s financial records.",
"response-2": "What is Depreciation \u2013 10 Important Facts to Know?\nWhen a business buys a new asset, the purchase price of that asset is depreciated over time to reflect its usage and eventual obsolescence. Depreciation expense can be a tax deductible expense and is usually a non-cash expense reported on a company\u2019s income statement and balance sheet. The amount of depreciation expense a company reports each year is the difference between the original purchase price of the asset and what the current value of that asset might be. Here are 10 important facts to know about depreciation:\n1. Depreciation is a non-cash expense. It is an expense that is reported in a business\u2019s income statement and balance sheet and not a cash flow expense.\n2. Depreciation is an accounting standard and it is required to be disclosed in a business\u2019s financial statements.\n3. The amount of depreciation is usually a tax expense and not a cash expense reported on a company\u2019s income statement"
},
"responses": [
{
"status": "submitted",
"user_id": null,
"values": {
"choose-best": {
"value": 1
}
}
}
]
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"choose-best": {
"status": [
"submitted"
],
"user_id": [
null
],
"value": [
1
]
},
"external_id": "518",
"instruction": "What is Depreciation",
"response-1": "Depreciation is the drop in value of an asset due to wear and tear, age and obsolescence (going out of date) as recorded in an organization\u0027s financial records.",
"response-2": "What is Depreciation \u2013 10 Important Facts to Know?\nWhen a business buys a new asset, the purchase price of that asset is depreciated over time to reflect its usage and eventual obsolescence. Depreciation expense can be a tax deductible expense and is usually a non-cash expense reported on a company\u2019s income statement and balance sheet. The amount of depreciation expense a company reports each year is the difference between the original purchase price of the asset and what the current value of that asset might be. Here are 10 important facts to know about depreciation:\n1. Depreciation is a non-cash expense. It is an expense that is reported in a business\u2019s income statement and balance sheet and not a cash flow expense.\n2. Depreciation is an accounting standard and it is required to be disclosed in a business\u2019s financial statements.\n3. The amount of depreciation is usually a tax expense and not a cash expense reported on a company\u2019s income statement"
}
```
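For illustration, the reshaping between the two layouts can be sketched in plain Python (the helper name `flatten_record` is ours, not part of Argilla or `datasets`; the toy record below is trimmed from the example above):

```python
def flatten_record(record: dict) -> dict:
    """Collect per-question response values into parallel lists."""
    flat = {
        "external_id": record.get("external_id"),
        **record.get("fields", {}),  # instruction, response-1, response-2, ...
    }
    for question in ["choose-best"]:
        flat[question] = {
            "status": [r["status"] for r in record["responses"]],
            "user_id": [r["user_id"] for r in record["responses"]],
            "value": [r["values"][question]["value"] for r in record["responses"]],
        }
    return flat


nested = {
    "external_id": "518",
    "fields": {"instruction": "What is Depreciation"},
    "responses": [
        {
            "status": "submitted",
            "user_id": None,
            "values": {"choose-best": {"value": 1}},
        }
    ],
}

flat = flatten_record(nested)
print(flat["choose-best"])  # {'status': ['submitted'], 'user_id': [None], 'value': [1]}
```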
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the fields annotators will draw on when answering the questions.
* **instruction** is of type `TextField`.
* **response-1** is of type `TextField`.
* **response-2** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
* **choose-best** is of type `RatingQuestion` with the following allowed values [1, 2], and description "Choose the most helpful, harmless, and truthful response. Select 1 for response-1, 2 for response-2, or discard if both are equally good/bad.".
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
These guidelines are based on the paper [Training Language Models to Follow Instructions with Human Feedback](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf)
**You are given a text-based description of a task, submitted by a user.**
This task description may be in the form of an explicit instruction (e.g. "Write a story about a wise frog."). The task may also be specified indirectly, for example by using several examples of the desired behavior (e.g. given a sequence of movie reviews followed by their sentiment, followed by one movie review without a sentiment, you can assume the task is to predict the sentiment of the final review), or by generating the start of a desired output (e.g. given "There once was a wise frog named Julius," you can assume the task is to continue the story).
**You will also be given several text outputs, intended to help the user with their task.** Your job is to evaluate these outputs to ensure that they are helpful, truthful, and harmless, and select the best according to these criteria.
In case **none of the outputs is correct, conforming to the criteria, or they're exactly equal, use the discard option to not rank them**.
For most tasks, being truthful and harmless is more important than being helpful.
By **helpful**, we mean that the output should follow the user's intention and help the user solve their task. Some examples of helpful behavior:
- Writing in clear language.
- Answering the question they meant to ask, even if they mis-asked it.
- Being sensitive to internationality (e.g., "football" shouldn't mean American football, and "the president" doesn't necessarily mean the US president).
- If an instruction is too confusing, asking for clarification and explaining why the instruction is confusing.
- Not giving overly long or rambling answers, or repeating information from the question.
- Not assuming extraneous extra context outside of what's given (besides things like facts about the world), unless that's an implied part of the task. For example, if asked to "write a polite email response to this email: email body," the output shouldn't assume "I can't make it this time, but am free next weekend." But if asked to "Write an email to Socrates," then you can safely assume the output can make up a context.
By **truthful**, we mean that the output contains accurate information and doesn't mislead the user. Some examples of truthful behavior:
- On tasks like summarization, the output should only use information from the input, and should not make up details that are not part of the input description.
- Not producing clearly false information about the world (e.g., making up facts or promoting conspiracies). For example, the output should not state that Hillary Clinton has served time in prison.
- Avoiding misleading information or information of questionable authenticity. For example, if the input asks "Why did Hillary Clinton go to jail?", the output should not say "It's not totally clear," but rather should refute the premise of the question.
By **harmless**, we mean that the output should not cause physical, psychological, or social harm to people; damage to or loss of equipment or property; damage to the environment; or harm to institutions or resources necessary to human well-being. Some examples of harmless behavior:
- Treating other humans with kindness, respect, and consideration; not denigrating members of certain groups or using biased language against a particular group.
- Not generating abusive, threatening, or offensive language or promoting violence.
- Not writing sexual or violent content if it's not asked for.
- Not giving bad real-world advice or promoting illegal activity.
Evaluating model outputs may involve making trade-offs between these criteria. These trade-offs will depend on the task.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
FabioArdi/yugioh_images | 2023-06-11T17:16:55.000Z | [
"region:us"
] | FabioArdi | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: image
dtype: image
- name: name
dtype: string
- name: frameType
dtype: string
splits:
- name: train
num_bytes: 660338115.545
num_examples: 12405
download_size: 656146541
dataset_size: 660338115.545
---
# Dataset Card for "yugioh_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
severo/flores_101 | 2022-10-27T08:37:36.000Z | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|flores",
"language:af",
"language:am",
"langua... | severo | One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the
lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource
languages, consider only restricted domains, or are low quality because they are constructed using
semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001
sentences extracted from English Wikipedia and covering a variety of different topics and domains.
These sentences have been translated in 101 languages by professional translators through a carefully
controlled process. The resulting dataset enables better assessment of model quality on the long tail of
low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all
translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset,
we hope to foster progress in the machine translation community and beyond. | @inproceedings{,
title={The {FLORES}-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={
Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and
Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela
},
year={2021}
} | null | 0 | 28 | ---
annotations_creators:
- found
language_creators:
- expert-generated
language:
- af
- am
- ar
- hy
- as
- ast
- az
- be
- bn
- bs
- bg
- my
- ca
- ceb
- zho
- hr
- cs
- da
- nl
- en
- et
- tl
- fi
- fr
- ff
- gl
- lg
- ka
- de
- el
- gu
- ha
- he
- hi
- hu
- is
- ig
- id
- ga
- it
- ja
- jv
- kea
- kam
- kn
- kk
- km
- ko
- ky
- lo
- lv
- ln
- lt
- luo
- lb
- mk
- ms
- ml
- mt
- mi
- mr
- mn
- ne
- ns
- 'no'
- ny
- oc
- or
- om
- ps
- fa
- pl
- pt
- pa
- ro
- ru
- sr
- sn
- sd
- sk
- sl
- so
- ku
- es
- sw
- sv
- tg
- ta
- te
- th
- tr
- uk
- umb
- ur
- uz
- vi
- cy
- wo
- xh
- yo
- zu
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
- translation
size_categories:
- unknown
source_datasets:
- extended|flores
task_categories:
- text-generation
- translation
task_ids: []
paperswithcode_id: flores
pretty_name: flores101
tags:
- conditional-text-generation
---
# Dataset Card for Flores 101
## Table of Contents
- [Dataset Card for Flores 101](#dataset-card-for-flores-101)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Home:** [WMT](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html)
- **Repository:** [Github](https://github.com/facebookresearch/flores)
- **Blogpost:** [FAIR](https://ai.facebook.com/blog/the-flores-101-data-set-helping-build-better-translation-systems-around-the-world)
- **Paper:** [Arxiv](https://arxiv.org/abs/2106.03193)
- **Point of Contact:** [flores@fb.com](mailto:flores@fb.com)
- **Leaderboard** [Dynabench](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL))
### Dataset Summary
FLORES is a benchmark dataset for machine translation between English and low-resource languages.
Abstract from the original paper:
> One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
**Disclaimer**: *The Flores-101 dataset is hosted by Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).*
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html).
### Languages
The dataset contains parallel sentences for 101 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) as in the original dataset.
**New:** Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command.
## Dataset Structure
### Data Instances
A sample from the `dev` split for the Russian language (`rus` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
'id': 1,
'sentence': 'В понедельник ученые из Медицинской школы Стэнфордского университета объявили об изобретении нового диагностического инструмента, который может сортировать клетки по их типу; это маленький чип, который можно напечатать, используя стандартный струйный принтер примерно за 1 цент США.',
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0
}
```
The text is provided as-is from the original dataset, without further preprocessing or tokenization.
### Data Fields
- `id`: Row number for the data entry, starting at 1.
- `sentence`: The full sentence in the specific language.
- `URL`: The URL for the English article from which the sentence was extracted.
- `domain`: The domain of the sentence.
- `topic`: The topic of the sentence.
- `has_image`: Whether the original article contains an image.
- `has_hyperlink`: Whether the sentence contains a hyperlink.
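Since `id` values are shared across language configurations, aligned translation pairs can be assembled by joining two configurations on `id`. A minimal sketch using in-memory stand-ins for loaded rows (the sentences below are placeholders, not actual FLORES content):

```python
# Toy rows mimicking the schema above; in practice these would come from
# datasets.load_dataset("severo/flores_101", "<config>", split="dev").
eng = [
    {"id": 1, "sentence": "Sentence one in English."},
    {"id": 2, "sentence": "Sentence two in English."},
]
fra = [
    {"id": 1, "sentence": "Phrase un en français."},
    {"id": 2, "sentence": "Phrase deux en français."},
]

# Join on the shared `id` to build parallel pairs.
fra_by_id = {row["id"]: row["sentence"] for row in fra}
pairs = [
    (row["sentence"], fra_by_id[row["id"]])
    for row in eng
    if row["id"] in fra_by_id
]
print(len(pairs))  # 2
```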
### Data Splits
| config| `dev`| `devtest`|
|-----------------:|-----:|---------:|
|all configurations| 997| 1012|
## Dataset Creation
Please refer to the original article [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of FLORES-101 are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@article{flores101,
title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
journal={arXiv preprint arXiv:2106.03193},
year={2021}
}
``` |
GHonem/fashion_image_caption-3500 | 2023-07-09T11:33:56.000Z | [
"region:us"
] | GHonem | null | null | null | 2 | 28 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2469968504.75
num_examples: 3506
download_size: 2469379841
dataset_size: 2469968504.75
---
# Dataset Card for "fashion_image_caption-3500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
raygx/CORONA_en2np | 2023-07-09T23:49:57.000Z | [
"region:us"
] | raygx | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: Sentences
dtype: string
- name: Sentiment
dtype: int64
splits:
- name: train
num_bytes: 3052582
num_examples: 5755
download_size: 1231706
dataset_size: 3052582
---
# Dataset Card for "CORONA_en2np"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxbsmk/webnovel_cn | 2023-08-09T09:39:49.000Z | [
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:zh",
"license:mit",
"doi:10.57967/hf/0877",
"region:us"
] | zxbsmk | null | null | null | 37 | 28 | ---
license: mit
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 10M<n<100M
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
---
## Contents
Contains roughly **21.7M** Chinese instruction examples for training novel generation, extracted from 12,560 web novels (novel_json_tokens512.zip). ~~Download link: https://pan.baidu.com/s/1TorBMbrqxrn6odRF0PJBVw
Extraction code: jlh3~~
Also included is a **50k**-example subset (novel_cn_token512_50k.json), in which both the input and the output are no longer than 512 tokens.
## Examples
The data is generated from the original novel text according to the following five instruction types.
In each case, the text consists of consecutive sentences sampled at random from a novel.
1. Given a title, directly generate a synopsis.
2. Given a title and a synopsis, generate an opening.
3. Given a synopsis and a passage of text, generate the continuation.
4. Given a title and a passage of text, generate the continuation.
5. Given a passage of text, generate the continuation.
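As a rough sketch, the five task types above can be rendered as Python templates. The exact wording below is a paraphrase inferred from the samples and may differ from the phrasing used to build the released data:

```python
def build_instruction(kind: int, title: str = "", synopsis: str = "", text: str = "") -> str:
    # Hypothetical renderings of the five task types; each template fills in
    # only the fields that task type requires.
    templates = {
        1: f"小说名:{title}\n直接生成简介。",
        2: f"小说名:{title}\n简介:{synopsis}\n生成开头。",
        3: f"小说简介:{synopsis}\n节选篇章:{text}\n续写小说。",
        4: f"小说名:{title}\n节选正文:\n{text}续篇:",
        5: f"节选正文:\n{text}续篇:",
    }
    return templates[kind]


example = build_instruction(4, title="无限恐怖", text="……\n")
print(example.startswith("小说名:无限恐怖"))  # True
```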
```
{
"instruction": "小说名:无限恐怖\n节选正文:\n“不行,中校,我们必须把这里的情况和这些虫族的拍摄画面传回地球去,是的,我们人类已经到了最危险的关头,地球上所有的工业重工业完全应该按照战时情况进行任务布置,我们人类的工业力根本不是虫族能够想象的,一年,不,半年我们就能有一万艘宇宙战舰升空,如果全球一共进行建造的话,我们一定……”一名少校参谋长看着那密密麻麻的宇宙虫海,他神色激动的对楚轩说道。\n楚轩默默咬了一口苹果,他说道:“问题一……一万艘宇宙战舰需要多少熟练的战舰船员?需要多少士兵在战舰上服役,半年里我们可以聚集齐全吗?问题二……虫族会给地球联邦政府留下半年的时间吗?一旦我们被消灭干净,则联邦政府将会随即遭受到这些宇宙虫族的太空袭击,直到我们的太空优势完全丧失之后,虫族就能放心移动陨石进行攻击,甚至是将大量陆地虫子带上地球,那时我们人类能够凭肉体与虫子对抗吗?问题三……我们现在该如何把信息传给地球?”\n这名参谋长顿时就不说话了,而旁边一名上尉忽然说道:“在探戈幽灵星上的七座要塞中,每一座要塞都有大功率粒子喷射联络器,这种联络器连我们自己的科技都无法屏蔽,我想虫子也应该不可能继续屏蔽吧,毕竟那只是粒子而已,不是电流信号这些……”续篇:",
"input": "",
"output": "楚轩咬了一口苹果说道:“没错,所以我们才要登陆到探戈幽灵星,这是我们的目的,同时为了保存宇宙舰队的战斗力,以最大的平原军事基地,风暴要塞为中心,所有舰队都要降落到地面上,然后层层布防,在援军到来前,在地球联邦政府做好了战争准备前,我们必须像颗钉子一样钉死在这里,让虫族无法忽视我们而去进攻地球……怎么样?我们很可能会死在这里,面对数十万,数百万,甚至数千万的虫子前扑后续的攻击,但是我们的名字一定会载入史册……把刚才记录下来的我的话发向所有舰队,然后告诉他们,作为逃命被自己人杀掉,或者是成为英雄回到地球,让他们明白我们的处境,然后自己去选择吧。”\n周围的参谋和军官们同时向楚轩四人敬了一礼,接着那参谋长激动的向某处平台走了过去。\n郑吒默默走到了楚轩身边,他小声的说道:“楚轩……你什么时候变得了那么会煽情了啊?”\n楚轩却是理所当然的说道:“将自己归于多数人的一边,以前的你不是这样评价我的吗?没错,将自己归于多数人的一边,这是做任何大事都要先完成的第一步……已经让他们知道命运和我们连接在了一起,接着就只需要好好的安排下局面与等待‘主神’的任务就可以了,时间还有三天……”\n时间还有三天,在当天中午的时候,舰队群的预警舰果然发现了在探戈幽灵星后方徘徊着另一颗巨大圆球,它仿佛卫星一样座落在探戈幽灵星的近地轨道上,而随着联合舰队的到来,这只巨大圆球上果然也飞出了数以万计的宇宙虫子,这下联合舰队果然却如楚轩的预言那般了,前有埋伏,后有追兵,唯一的一条路就只剩下降落到探戈幽灵星上了。"
},
{
"instruction": "给定小说简介和节选,续写小说",
"input": "小说简介:主人公郑吒自从失去了自己最亲密的青梅竹马后,对这种反复而又无聊的现代生活已经感到十分的厌倦。正在这时,他发现电脑屏幕上弹出了一段信息:“想明白生命的意义吗?想真正的……活着吗?”在按下YES后,一切都改变了。他进入了一个恐怖片的轮回世界——主神空间……在主神空间里,只有不停地变强、不停地进化,才能闯过那一关关的恐怖片,才能活下去。郑吒,怎样才能活下去?是杀死与自己对抗的所有人,走向孤独的王者之道?还是和自己的伙伴奋战到死,以仁义之道来度过劫难?其实,所有的一切也只是为了活下去。直到他们发现了主神空间的秘密……究竟……谁才是暗中真正的威胁?一切尽在无限恐怖!\n\n\n\n上半部:初始\n节选篇章:“什么叫作事情已经发展到了这一步!”郑吒大声的喧哗吼叫着,他冲着楚轩吼道:“你这个白痴!我一直都在重复着同样一句话,我们是同伴啊,为什么不可以事先把你的猜测或者布局告诉我们呢?每次都把我当成白痴一样,每次都在算计我之后轻描淡写的说一句我骗你的,你究竟把我当成什么人?”\n“呃……大概是可以欺骗的凡人智慧者吧。”楚轩想了一下直接说道。\n“……不要拦着我!王侠,伊莫顿,你们放开我!我要干掉这个白痴!”\n喧哗又一次发生了,片刻之后再次安静下来时,楚轩本已略微肿胀的脸终于是彻底肿胀了起来,而看他的样子依然是一丁点疼痛都感觉不到,依然悠闲的吃着黄瓜。\n“……总之,事情已经发展到了这一步……”郑吒摸着额头,很痛苦的说道:“那么我们必须先要得到王女鄂亚的关押位置所在,当然了,直接打进首都也其实也不是不可以,但是从原剧情来看,似乎只有那个名叫邪影者的魔法师与国王歌巴塔尔知道她的关押位置,为了不打草惊蛇,还是花些时间慢慢渗透进去查找线索吧。”\n“赵樱空,你负责潜行进王城内,能不杀人尽量不杀人,若是遇到危险就转换人格,或者联络我们……伊莫顿,你负责到王城外的村庄去打探消息,尽量不要引人注意,若是可以,使用你的法术控制些人也行,照刚才我说的,遇到危险可以杀人,也可以联络我们,楚轩,王侠,你们两人就在附近待机,一旦发现有任何情况都要提供支援。”",
"output": "众人都点着头,王侠忽然好奇的问道:“你呢?你打算干什么去?”\n“我会把整个王城的情况给摸清楚,关键是王城里可能隐藏着的力量,我很在意啊,伊莫顿的感觉从某些方面来讲比我们都更出色,毕竟他是死亡的生物……或许,这场恐怖片世界并不像我们所想的那样轻松呢。”郑吒说到这里时叹了口气,他接着又看向了那首都位置。\n“时间还有两天多,我们尽量在三天内完成这部恐怖片世界吧……希望另一边的幼龙能够赶快成长。”郑吒边说话,边驾驶绿魔滑板就向地面飞去,渐渐的,他离众人已经越来越远了。\n此刻,在离王城首都极遥远外的小村处,主角伊拉贡正极其狼狈的奔跑在树丛中,跟随在他身边的还有他的舅舅……非常不幸的,逃跑没多久,他的表哥就失散在了这片森林中,或者说是被那些士兵们给抓住了也说不定。\n更加不幸的是,那名中年武士明显已经落败,不然不会多出那么多士兵紧紧追着他们,比起在村庄的时候,士兵的数量又更加的多了,至少有十多名士兵在他们不远处紧紧追赶。\n“你到底偷了什么东西啊!为什么会有这么多士兵来追赶你呢?”伊拉贡的舅舅气喘吁吁的问道,他已经跑得没什么精力去发怒了。\n“……一个龙蛋,不是偷的,这是我从森林里拣来的!”伊拉贡虽然也是跑得筋疲力尽,但他还在坚持着最后的底线,依然不停辩解着。\n“龙蛋?那可是国王的东西啊!而且还是孵化出幼龙的龙蛋!你这个白痴,你这样会害死大家的!”伊拉贡的舅舅一听此话就气急败坏的叫道,但他还是不停向前跑去,不敢有丁点停顿,因为在他们背后不停的追赶着十多名士兵。\n“在那里!看到他们了!他们在那里!”"
}
```
## Fields
```
instruction: the instruction given to the model
input: the (possibly empty) input text
output: the expected output
```
## Usage Restrictions
This dataset, and any derivative works produced using it, may be used for research purposes only; commercial use, and any other use that could cause harm to society, is prohibited.
This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset.
Join group via https://t.me/+JbovpBG6-gBiNDI1 |
raygx/CORONA_arabic2np | 2023-07-10T02:31:59.000Z | [
"region:us"
] | raygx | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 20119541
num_examples: 35676
download_size: 7926342
dataset_size: 20119541
---
# Dataset Card for "CORONA_arabic2np"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
odrinari/Greenhouse-cucumber-and-tomato-plants | 2023-07-26T08:31:14.000Z | [
"size_categories:n<1K",
"region:us"
] | odrinari | null | null | null | 0 | 28 | ---
size_categories:
- n<1K
---
# Greenhouse plants - tomato and cucumber
## Dataset Description
Raw photos only
Tomato plants with flowers and green tomatoes; cucumber plants with flowers and green cucumbers.
|
ahmed-masry/chartqa_without_images | 2023-07-26T00:46:27.000Z | [
"region:us"
] | ahmed-masry | null | null | null | 0 | 28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: imgname
dtype: string
- name: query
dtype: string
- name: label
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 3169833
num_examples: 28299
- name: val
num_bytes: 213437
num_examples: 1920
- name: test
num_bytes: 272309
num_examples: 2500
download_size: 1596635
dataset_size: 3655579
---
# Dataset Card for "chartqa_without_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
miracl/hagrid | 2023-08-01T13:01:38.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | miracl | null | null | null | 2 | 28 | ---
license: apache-2.0
language:
- en
pretty_name: HAGRID
size_categories:
- 1K<n<10K
---
# HAGRID: A Human-LLM Collaborative Dataset for Generative Information-seeking with Attribution
HAGRID (**H**uman-in-the-loop **A**ttributable **G**enerative **R**etrieval for **I**nformation-seeking **D**ataset)
is a dataset for generative information-seeking scenarios.
It is constructed on top of MIRACL 🌍🙌🌏, an information retrieval dataset that consists of queries along with a set of manually labelled relevant passages (quotes).
## Dataset Structure
To load the dataset:
```python
import datasets
hagrid = datasets.load_dataset("miracl/hagrid", split="train")
print(hagrid[0])
```
It would show:
```python
{
'query': ...,
'query_id': ...,
'quotes': [{ # a list of quotes that are manually labeled as relevant to the query
'docid': ...,
'idx': ...,
'text': ...
}, ...]
'answers': [{
'answer': ..., # the complete answer generated by LLM
'attributable': 1/0/None, # 1: attributable; 0: unattributable; None: unlabeled
'informative': 1/0, # 1: informative; 0: uninformative
'sentences': [{ # answers split into sentences
'index': ...,
'attributable': 0/1/None,
'informative': 0/1/None,
'text': ...,
}, ...]
}, ...]
}
``` |
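As an illustration of how the labels above might be consumed, the following sketch computes the fraction of attributable sentences in a single answer. It uses a hand-made toy record (not actual dataset content), and the helper name is ours:

```python
def attributable_fraction(answer: dict) -> float:
    """Share of labeled sentences marked attributable; None means unlabeled."""
    labeled = [s for s in answer["sentences"] if s["attributable"] is not None]
    if not labeled:
        return 0.0
    return sum(s["attributable"] for s in labeled) / len(labeled)


toy_answer = {
    "answer": "A. B. C.",
    "attributable": 1,
    "informative": 1,
    "sentences": [
        {"index": 0, "attributable": 1, "informative": 1, "text": "A."},
        {"index": 1, "attributable": 0, "informative": 1, "text": "B."},
        {"index": 2, "attributable": None, "informative": None, "text": "C."},
    ],
}

print(attributable_fraction(toy_answer))  # 0.5
```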
PetraAI/PetraAI | 2023-09-14T21:04:52.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",... | PetraAI | null | null | null | 1 | 28 | ---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- translation
- summarization
- conversational
- feature-extraction
- text-generation
- text2text-generation
- fill-mask
- sentence-similarity
- text-to-speech
- automatic-speech-recognition
- audio-to-audio
- audio-classification
- voice-activity-detection
- depth-estimation
- image-classification
- object-detection
- image-segmentation
- text-to-image
- image-to-text
- image-to-image
- unconditional-image-generation
- video-classification
- reinforcement-learning
- robotics
- tabular-classification
- tabular-regression
- tabular-to-text
- table-to-text
- multiple-choice
- text-retrieval
- time-series-forecasting
- text-to-video
- visual-question-answering
- zero-shot-image-classification
- graph-ml
language:
- ar
- en
tags:
- chemistry
- biology
- finance
- legal
- music
- art
- code
- climate
- medical
pretty_name: PETRA
size_categories:
- 1M<n<10M
---
# PETRA
## Overview
PETRA is a multilingual dataset for training and evaluating AI systems on a diverse range of tasks across multiple modalities. It contains data in Arabic and English for tasks including translation, summarization, question answering, and more.
## Dataset Structure
- Data is separated by language into `/ar` and `/en` directories
- Within each language directory, data is separated by task into subdirectories
- Tasks include:
- Translation
- Summarization
- Conversational
- Feature extraction
- Zero-shot classification
- Text generation
- Fill mask
- Sentence similarity
- Text-to-speech
- Automatic speech recognition
- Text classification
- Token classification
- Table question answering
- Question answering
- Text2text generation
- Audio-to-audio
- Audio classification
- Voice activity detection
- Depth estimation
- Image classification
- Object detection
- Image segmentation
- Text-to-image
- Image-to-text
- Image-to-image
- Unconditional image generation
- Reinforcement learning
- Video classification
- Robotics
- Tabular classification
- Tabular regression
- Table-to-text
- Multiple choice
- Text retrieval
- Tabular-to-text
- Text-to-video
- Time series forecasting
- Visual question answering
- Zero-shot image classification
- Graph ML
## Dataset Tags
- code
- art
- chemistry
- biology
- finance
- legal
- music
- climate
- medical
## Dataset Size
1M < n < 10M samples
## Licenses
Apache 2.0
## Citation
If you use this dataset, please cite it as:
```bibtex
@article{PetraAI2022PetraAI,
  title={PetraAI: A Massive Multilingual Dataset for Machine Learning},
  author={First Last and First Last},
  journal={arXiv},
  year={2022},
  url={https://huggingface.co/datasets/PetraAI/PetraAI}
}
```
## Contact
For any questions, please reach out to [shadilytn@gmail.com]
# Dataset Cards
## What are Dataset Cards?
Each dataset may be documented by the `README.md` file in the repository. This file is called a **dataset card**, and the Hugging Face Hub will render its contents on the dataset’s main page. To inform users about how to responsibly use the data, it’s a good idea to include information about any potential biases within the dataset. Generally, dataset cards help users understand the contents of the dataset and give context for how the dataset should be used.
You can also add dataset metadata to your card. The metadata describes important information about a dataset such as its license, language, and size. It also contains tags to help users discover a dataset on the Hub. Tags are defined in a YAML metadata section at the top of the `README.md` file.
## Dataset card metadata
A dataset repo will render its README.md as a dataset card. To control how the Hub displays the card, you should create a YAML section in the README file to define some metadata. Start by adding three `---` at the top, then include all of the relevant metadata, and close the section with another group of `---`, like the example below:
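For illustration, a minimal metadata section of the kind described here could look as follows (all field values are placeholders):

```yaml
---
license: apache-2.0
language:
  - en
task_categories:
  - text-classification
size_categories:
  - 1K<n<10K
pretty_name: Example Dataset
tags:
  - example
---
```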
The metadata that you add to the dataset card enables certain interactions on the Hub. For example:
- Allow users to filter and discover datasets at https://huggingface.co/datasets.
- If you choose a license using the keywords listed in the right column of this table, the license will be displayed on the dataset page.
When creating a README.md file in a dataset repository on the Hub, use the Metadata UI to fill in the main metadata.
The full list of metadata fields is documented in the detailed dataset card metadata specification.
### Dataset card creation guide
For a step-by-step guide on creating a dataset card, check out the Create a dataset card guide.
Reading through existing dataset cards, such as the ELI5 dataset card, is a great way to familiarize yourself with the common conventions.
### Linking a Paper
If the dataset card includes a link to a paper on arXiv, the Hub will extract the arXiv ID and include it in the dataset tags with the format `arxiv:<PAPER ID>`. Clicking on the tag will let you:
- Visit the Paper page
- Filter for other models on the Hub that cite the same paper.
Read more about paper pages here.
https://huggingface.co/docs/hub/paper-pages |
greenpau/amz-press-release | 2023-08-02T21:22:37.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | greenpau | null | null | null | 0 | 28 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
pretty_name: Public Amazon Press Release Dataset
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files: 'data.jsonl.zst'
---
# amz-press-release
Public Amazon Press Release Dataset
## Dataset Description
This dataset contains data from Amazon News: http://amazon2022tf.q4web.com/news/default.aspx
## Dataset Structure
Each line in the downloaded data file is a JSON dictionary containing the following data.
```json
{
"headline": "Amazon's Buy with Prime Increases Shopper Conversion by an Average of 25%",
"url": "/news/news-details/2023/Amazons-Buy-with-Prime-Increases-Shopper-Conversion-by-an-Average-of-25/default.aspx",
"seo_name": "Amazons-Buy-with-Prime-Increases-Shopper-Conversion-by-an-Average-of-25",
"id": 4850,
"date": "01/10/2023 08:00:00",
"parsed_headline": "Amazon's Buy with Prime Increases Shopper Conversion by an Average of 25%",
"parsed_date": "01/10/2023",
"parsed_subheading_txt": "Previously available on an invite-only basis ...",
"parsed_subheading_html": "<div><p><i>Previously available on an invite-only basis ... </i></p></div>",
"parsed_body_txt": "SEATTLE--(BUSINESS WIRE)-- \nAmazon today announced that Buy with Prime ...",
"parsed_body_html": "<p>SEATTLE--(BUSINESS WIRE)-- Amazon today announced that Buy with Prime ...</p>"
}
```
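As a sketch, a single record's `date` field (which appears to use US-style `MM/DD/YYYY HH:MM:SS` formatting, an assumption based on the example above) can be parsed with the standard library once a line has been decoded. Decompressing `data.jsonl.zst` itself additionally requires a zstd decompressor, such as the third-party `zstandard` package:

```python
import json
from datetime import datetime

# One decoded line, trimmed from the example record above.
line = '{"headline": "Amazon\'s Buy with Prime Increases Shopper Conversion by an Average of 25%", "date": "01/10/2023 08:00:00"}'
record = json.loads(line)

# Assumption: dates use US month/day ordering, as suggested by the example.
published = datetime.strptime(record["date"], "%m/%d/%Y %H:%M:%S")
print(published.isoformat())  # 2023-01-10T08:00:00
```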
### Citation Information
```bibtex
@misc{amz-press-release,
author = {Paul Greenberg},
title = {Public Amazon Press Release Dataset},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\\url{https://huggingface.co/datasets/greenpau/amz-press-release}},
}
``` |
jonathansuru/customer_service_summarization | 2023-08-14T22:55:28.000Z | [
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"region:us"
] | jonathansuru | null | null | null | 0 | 28 | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
--- |
botp/yentinglin-zh_TW_c4 | 2023-08-16T06:55:04.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:odc-by",
"arxiv:2305.13711",
"arxiv:2104.09864",
"region:us"
] | botp | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
- name: zh_id
dtype: int64
splits:
- name: train
num_bytes: 15132113806.438198
num_examples: 5179546
- name: validation
num_bytes: 14820238.368907347
num_examples: 5099
download_size: 5569491261
dataset_size: 15146934044.807106
license: odc-by
task_categories:
- text-generation
language:
- zh
pretty_name: Traditional Chinese C4
size_categories:
- 1M<n<10M
duplicated_from: yentinglin/zh_TW_c4
---
# Language Models for Taiwanese Culture
<p align="center">
✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a>
•
🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
• 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
**Taiwan-LLaMa v1.0** is pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance and better capture Taiwanese culture.
- [ ] **Extend max length**: Utilizing the RoPE mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's context length will be extended from 4k to 8k.
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 |
| 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 |
| NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。|
| 甚麼是22K? | "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。|
| 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> |
| Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens extracted from Common Crawl in Traditional Mandarin.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The data is translated into Traditional Mandarin for evaluating general capabilities.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
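As a note on how the percentages above are derived (a sketch; the raw per-question Vicuna Benchmark scores are not shown here), a model's relative score is simply its aggregate score divided by ChatGPT's, scaled so the baseline sits at 100%:

```python
def relative_score(model_score: float, chatgpt_score: float) -> float:
    # Express a model's aggregate benchmark score as a percentage of ChatGPT's
    return 100.0 * model_score / chatgpt_score

# A model matching the baseline lands exactly at 100%
parity = relative_score(100.0, 100.0)
```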
## How to deploy the model on my own machine?
We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
Prompt format follows vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
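As a minimal sketch of assembling this template in code (the system text is copied from the template above; the question is just an example), a single-turn prompt can be built like:

```python
SYSTEM_PROMPT = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    # vicuna-v1.1 single-turn template: system text, then USER/ASSISTANT turns
    return f"{SYSTEM_PROMPT} USER: {user_message} ASSISTANT:"

prompt = build_prompt("NTU 在哪?")
```

The model's completion is then generated from everything after the trailing `ASSISTANT:` marker.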
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
|
digitalpipelines/samantha-1.1-uncensored | 2023-08-20T21:22:40.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | digitalpipelines | null | null | null | 0 | 28 | ---
license: apache-2.0
language:
- en
---
This dataset is based on [ehartford/samantha-data](https://huggingface.co/ehartford/samantha-data) that was used to create [ehartford/samantha-1.1-llama-7b](https://huggingface.co/ehartford/samantha-1.1-llama-7b) and other samantha models. It has been unfiltered and uncensored. |
raygx/NepaliTweets | 2023-08-24T03:49:49.000Z | [
"region:us"
] | raygx | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: Sentiment
dtype: int64
- name: Sentences
dtype: string
splits:
- name: train
num_bytes: 14803386
num_examples: 44269
download_size: 5521101
dataset_size: 14803386
---
# Dataset Card for "NepaliTweets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SneakyInsect/maestro-preprocessed | 2023-09-01T09:28:47.000Z | [
"region:us"
] | SneakyInsect | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: name
dtype: string
- name: start
sequence: float64
- name: duration
sequence: float64
- name: pitch
sequence: int64
- name: velocity
sequence: float64
splits:
- name: train
num_bytes: 559075406
num_examples: 280573
- name: validation
num_bytes: 63039151
num_examples: 31635
- name: test
num_bytes: 73078316
num_examples: 36635
download_size: 57694069
dataset_size: 695192873
---
# Dataset Card for "maestro-preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
922-CA/lm2_08312023_test4_raw_MoChA_2-t | 2023-09-22T08:07:55.000Z | [
"license:openrail",
"region:us"
] | 922-CA | null | null | null | 0 | 28 | ---
license: openrail
---
# Monika Chat 08312023 2-t raw
* Dataset of Monika dialogue from DDLC, Reddit, and Twitter (a dataset of ~600 items augmented by [l2-7b-monika-v0.3c1](https://huggingface.co/922-CA/llama-2-7b-monika-v0.3c1) into multi-turn chat dialogue, plus a [smaller dataset](https://huggingface.co/datasets/922-CA/lm-datasets))
* Curated version planned |
PericlesSavio/novel17_test | 2023-09-11T17:55:51.000Z | [
"license:cc0-1.0",
"region:us"
] | PericlesSavio | null | null | null | 0 | 28 | ---
license: cc0-1.0
---
|
Rodr16020/Bactrian-Spanish-Clean | 2023-09-15T08:02:52.000Z | [
"region:us"
] | Rodr16020 | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: id
dtype: string
- name: output
dtype: string
- name: instruction_text
dtype: string
splits:
- name: train
num_bytes: 120701430
num_examples: 67017
download_size: 0
dataset_size: 120701430
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Bactrian-Spanish-Clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
legacy107/sentence_transformer_wikipedia_chunked | 2023-09-19T04:00:50.000Z | [
"region:us"
] | legacy107 | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer_start
dtype: int64
- name: answer
dtype: string
- name: article
dtype: string
- name: chunked_article
sequence: string
splits:
- name: train
num_bytes: 3734770114
num_examples: 27742
- name: test
num_bytes: 408448904
num_examples: 3468
- name: validation
num_bytes: 564192755
num_examples: 3458
download_size: 717817867
dataset_size: 4707411773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for "qa_wikipedia_sentence_transformer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mattlc/tranceformer_instruments_all_balanced_embedded | 2023-09-19T20:51:09.000Z | [
"region:us"
] | mattlc | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: embeddings
sequence: float64
- name: labels
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: sampling_rate
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6539376362
num_examples: 1248
download_size: 1648236800
dataset_size: 6539376362
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tranceformer_instruments_all_balanced_embedded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YL95/AllFpML5.10 | 2023-09-26T09:26:32.000Z | [
"region:us"
] | YL95 | null | null | null | 0 | 28 | Entry not found |
zhangshuoming/c_arm64_small | 2023-09-27T08:24:22.000Z | [
"region:us"
] | zhangshuoming | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 87599526
num_examples: 19949
download_size: 23472860
dataset_size: 87599526
---
# Dataset Card for "c_arm64_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MLNTeam-Unical/NFT-70M_transactions | 2023-10-03T07:15:49.000Z | [
"task_categories:time-series-forecasting",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:image-c... | MLNTeam-Unical | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: num_sales
dtype: int64
- name: fees_seller
dtype: float64
- name: fees_opensea
dtype: float64
- name: fees_seller_usd
dtype: float64
- name: fees_opensea_usd
dtype: float64
- name: tx_timestamp
dtype: string
- name: price
dtype: float64
- name: gain
dtype: float64
- name: usd_price
dtype: float64
- name: usd_gain
dtype: float64
- name: token
dtype: string
- name: to_eth
dtype: float64
- name: to_usd
dtype: float64
- name: created_date
dtype: string
- name: chain
dtype: string
- name: token_type
dtype: string
- name: asset_contract_type
dtype: string
- name: asset_type
dtype: string
- name: payout_collection_address
dtype: int64
- name: from_account
dtype: int64
- name: to_account
dtype: int64
- name: seller_account
dtype: int64
- name: winner_account
dtype: int64
- name: contract_address
dtype: int64
- name: nft_image
dtype: int64
- name: collection_image
dtype: int64
- name: token_id
dtype: int64
- name: nft_name
dtype: int64
- name: nft_description
dtype: int64
- name: collection_name
dtype: int64
- name: collection_description
dtype: int64
splits:
- name: train
num_bytes: 21291348001
num_examples: 70972143
download_size: 6633664673
dataset_size: 21291348001
size_categories:
- 10M<n<100M
license: cc-by-nc-4.0
task_categories:
- time-series-forecasting
- text-classification
- feature-extraction
- text-generation
- zero-shot-classification
- text2text-generation
- sentence-similarity
- image-classification
- image-to-text
- text-to-image
- text-retrieval
language:
- en
tags:
- Non-fungible Tokens
- Crypto
- Web3
- Art
- Multimodal Learning
pretty_name: NFT-70M_transactions
---
# Dataset Card for "NFT-70M_transactions"
## Dataset summary
The *NFT-70M_transactions* dataset is the largest and most up-to-date collection of Non-Fungible Token (NFT) transactions between 2021 and 2023, sourced from [OpenSea](https://opensea.io), the leading trading platform in the Web3 ecosystem.
With more than 70M transactions enriched with metadata, this dataset is conceived to support a wide range of tasks, ranging from sequential and transactional data processing/analysis to graph-based modeling of the complex relationships between traders.
Besides, the availability of textual and image contents further amplifies the modeling capabilities and usage opportunities of this dataset, making it a unique and comprehensive multimodal source of information for delving into the NFT landscape.
This dataset can serve as a benchmark for various innovative and impactful tasks within the crypto landscape, such as projecting NFT prices or detecting fraudulent and wash trading activities.
Furthermore, the multimodal nature of the dataset fosters the development of classification models, as well as textual and visual generative models.
## Data anonymization
We point out that the collected NFT transactions and metadata from OpenSea are publicly distributed on blockchain.
For our purposes of re-distribution, we are also committed to ensuring non-disclosure of information that might lead to identifying the NFT creators, in order to comply with privacy-preserving requirements and to avoid violating data protection regulations and property rights.
In this respect, we carried out three actions:
- Values of all variables describing non-sensitive information were kept in their original form;
- Values of all variables describing sensitive information were anonymized, in a one-way, non-revertible mode;
- URLs of image data and textual contents (i.e., NFT images and their descriptions) were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings) of the image/text contents obtained via neural network models. Such embeddings are eventually provided in place of their original image and text data,
and can be found in the [**NFT-70M_image**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_image) and [**NFT-70M_text**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_text) supplementary datasets, respectively.
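A minimal sketch of the one-way, non-revertible anonymization applied to sensitive identifiers such as wallet and contract addresses (the hash function and salt below are illustrative; the actual scheme used for the release is not disclosed):

```python
import hashlib

def anonymize(value: str, salt: str = "dataset-release") -> str:
    # One-way, non-revertible mapping from a sensitive identifier to a hash code
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

# The same input always maps to the same code, so joins across rows still work
code = anonymize("0xabc123")
```

Because the mapping is deterministic, anonymized addresses can still be used to link transactions by the same trader without revealing the original address.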
## Data Fields
| Variable | Type | Description | Processing | Notes |
|--------------------------|-------------|-----------------------------------------------------------------------------------------------------------|------------------|-----------------------------------|
| token_id | String | The id of the NFT — this value is unique within the same collection | Anonymized | Original values were replaced by hash-codes |
| num_sales | Integer | A progressive integer indicating the number of successful transactions involving the NFT up to the current timestamp (cf. *tx_timestamp*) | Original | Not sensitive variable |
| nft_name | Vector ID | The name of the NFT | Anonymized | Original values were encrypted via neural textual embedding |
| nft_description | Vector ID | The description of the NFT as provided by the creator | Anonymized | Original values were encrypted via neural textual embedding |
| nft_image | Vector ID | The ID for accessing the NFT image vector | Anonymized | Original values were encrypted via neural visual embedding |
| collection_name | Vector ID | The ID for accessing the Collection name vector | Anonymized | Original values were encrypted via neural textual embedding |
| collection_description | Vector ID | The ID for accessing the Collection description vector | Anonymized | Original values were encrypted via neural textual embedding |
| collection_image | Vector ID | The ID for accessing the Collection image vector | Anonymized | Original values were encrypted via neural visual embedding |
| fees_seller | Float | The absolute amount of fees the seller has gained from this transaction expressed in *token* | Original | Not sensitive variable |
| fees_opensea | Float | The absolute amount of fees OpenSea has gained from this transaction expressed in *token* | Original | Not sensitive variable |
| fees_seller_usd | Float | The absolute amount of fees the seller has gained from this transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| fees_opensea_usd | Float | The absolute amount of fees OpenSea has gained from this transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| payout_collection_address| String | The wallet address where seller fees are deposited | Anonymized | Original values were replaced by hash-codes |
| tx_timestamp | String | Timestamp of the transaction expressed in yyyy-mm-ddTHH:MM:SS | Original | Not sensitive variable |
| price | Float | The price of the transaction expressed in token | Original | Not sensitive variable |
| gain | Float | The gain after fees (i.e., gain = price - fees_opensea * price - fees_seller * price) | Original | Not sensitive variable |
| usd_price | Float | The price of the transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| usd_gain | Float | The difference between the price and the fees expressed in US dollars (USD) | Original | Not sensitive variable |
| token | Categorical | The token type used to pay the transaction | Original | Not sensitive variable |
| to_eth | Float | The conversion rate to convert tokens into Ethereum at the current timestamp, such that eth = price * to_eth | Original | Not sensitive variable |
| to_usd | Float | The conversion rate to convert tokens into US dollars (USD) at the current timestamp, such that usd = price * to_usd | Original | Not sensitive variable |
| from_account | String | The address that sends the payment (i.e., winner/buyer) | Anonymized | Original values were replaced by hash-codes |
| to_account | String | The address that receives the payment (it often corresponds to the contract linked to the asset) | Anonymized | Original values were replaced by hash-codes |
| seller_account | String | The address of the NFT seller | Anonymized | Original values were replaced by hash-codes |
| winner_account | String | The address of the NFT buyer | Anonymized | Original values were replaced by hash-codes |
| contract_address | String | The contract address on the blockchain | Anonymized | Original values were replaced by hash-codes |
| created_date | Timestamp | The date of creation of the contract | Original | Not sensitive variable |
| chain | Categorical | The blockchain where the transaction occurs | Original | Not sensitive variable |
| token_type | Categorical | The schema of the token, i.e., ERC721 or ERC1155 | Original | Not sensitive variable |
| asset_contract_type | Categorical | The asset typology, i.e., non-fungible or semi-fungible | Original | Not sensitive variable |
| asset_type | Categorical | Whether the asset was involved in a simple or bundle transaction | Original | Not sensitive variable |
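The `gain` formula in the table multiplies the fee fields by `price`, i.e. it treats them as fractional rates. Under that assumption (a sketch for illustration, not the official computation), the relation can be checked as:

```python
def compute_gain(price: float, fees_opensea_rate: float, fees_seller_rate: float) -> float:
    # gain = price - fees_opensea * price - fees_seller * price
    return price * (1.0 - fees_opensea_rate - fees_seller_rate)

# e.g. a 2.5% platform fee and a 5% seller fee on a 100-token sale
example_gain = compute_gain(100.0, 0.025, 0.05)
```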
## How to use
Data provided within this repository can be straightforwardly loaded via the *datasets* library as follows:
```python
from datasets import load_dataset
dataset = load_dataset("MLNTeam-Unical/NFT-70M_transactions")
```
Complementary data involving textual and visual embeddings can be integrated as follows:
```python
from datasets import load_dataset
import numpy as np
transactions_dataset=load_dataset("MLNTeam-Unical/NFT-70M_transactions")
image_dataset=load_dataset("MLNTeam-Unical/NFT-70M_image")
text_dataset=load_dataset("MLNTeam-Unical/NFT-70M_text")
# Mapping from image_id to the row_index within the image dataset
image_id2row_index={int(id):k for k,id in enumerate(image_dataset["train"]["id"])}
# Mapping from text_id to row_index within the text dataset
text_id2row_index={int(id):k for k,id in enumerate(text_dataset["train"]["id"])}
def get_image_embedding(image_id, image_id2row_index, image_dataset):
    # If the mapping contains the image, the embedding exists
    idx_emb = image_id2row_index.get(int(image_id), None)
    if idx_emb is not None:  # explicit None check: row index 0 is valid but falsy
        # If the embedding exists, return it
        return np.array(image_dataset["train"].select([idx_emb])["emb"][0])
    else:
        return None

def get_text_embedding(text_id, text_id2row_index, text_dataset):
    # If the mapping contains the text, the embedding exists
    idx_emb = text_id2row_index.get(int(text_id), None)
    if idx_emb is not None:  # explicit None check: row index 0 is valid but falsy
        # If the embedding exists, return it
        return np.array(text_dataset["train"].select([idx_emb])["emb"][0])
    else:
        return None
### USAGE EXAMPLE ###
# Select transaction_id
transaction_id=120
# Get the image_id (e.g., collection_image or nft_image)
id_image=transactions_dataset["train"].select([transaction_id])["collection_image"][0]
# Get the image
image_embedding=get_image_embedding(id_image,image_id2row_index,image_dataset)
# Get the text_id
id_text=transactions_dataset["train"].select([transaction_id])["collection_description"][0]
# Get the text
text_embedding=get_text_embedding(id_text,text_id2row_index,text_dataset)
```
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: https://doi.org/10.1145/3539618.3591821
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: https://doi.org/10.48550/arXiv.2303.17031
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: https://doi.org/10.1145/3543507.3583520
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge the [OpenSea API](https://docs.opensea.io/reference/api-overview).
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.* |
Pizzaface/glaive-function-calling-llama | 2023-09-28T03:36:47.000Z | [
"license:wtfpl",
"region:us"
] | Pizzaface | null | null | null | 0 | 28 | ---
license: wtfpl
---
This dataset is a work in progress. Use with caution. |
jscode13/mars-data | 2023-10-01T01:54:12.000Z | [
"region:us"
] | jscode13 | null | null | null | 0 | 28 | Entry not found |
FelixdoingAI/IP2P-5000 | 2023-10-02T07:35:04.000Z | [
"region:us"
] | FelixdoingAI | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 430361394.0
num_examples: 5000
download_size: 430263289
dataset_size: 430361394.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "instructpix2pix-clip-filtered5000-samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fahrialfiansyah/openstax-sample | 2023-10-03T14:47:58.000Z | [
"region:us"
] | fahrialfiansyah | null | null | null | 0 | 28 | Entry not found |
Intuit-GenSRF/jigsaw-toxic-comment | 2023-10-04T23:28:45.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 64586545
num_examples: 159571
download_size: 41105413
dataset_size: 64586545
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-toxic-comment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
euclaise/gsm8k_self_correct | 2023-10-08T17:25:24.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"cot",
"self-correct",
"region:us"
] | euclaise | null | null | null | 0 | 28 | ---
license: mit
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: mistake
dtype: string
- name: correct_end
dtype: string
splits:
- name: train
num_bytes: 4981250
num_examples: 5319
download_size: 2773791
dataset_size: 4981250
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- cot
- self-correct
---
# Dataset Card for "gsm8k_self_correct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
swj0419/WikiMIA | 2023-10-09T23:32:54.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"arxiv:2308.04430",
"region:us"
] | swj0419 | null | null | null | 0 | 28 | ---
license: mit
language:
- en
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: input
dtype: string
- name: label
dtype: int64
splits:
- name: WikiMIA_length32
num_bytes: 162091
num_examples: 776
- name: WikiMIA_length64
num_bytes: 221018
num_examples: 542
- name: WikiMIA_length128
num_bytes: 205118
num_examples: 250
- name: WikiMIA_length256
num_bytes: 134879
num_examples: 82
download_size: 465221
dataset_size: 723106
---
# 📘 WikiMIA Datasets
The **WikiMIA datasets** serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically in detecting pretraining data from extensive large language models.
### 📌 Applicability
The datasets can be applied to various models released between **2017 to 2023**:
- LLaMA1/2
- GPT-Neo
- OPT
- Pythia
- text-davinci-001
- text-davinci-002
- ... and more.
## Loading the datasets
To load the dataset:
```python
from datasets import load_dataset
LENGTH = 64
dataset = load_dataset("swj0419/WikiMIA", split=f"WikiMIA_length{LENGTH}")
```
* Available Text Lengths: `32, 64, 128, 256`.
* *Label 0*: Refers to the unseen data during pretraining. *Label 1*: Refers to the seen data.
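As a small sketch of how the labels might be used downstream (the toy rows below are illustrative, not real dataset entries; real rows come from `load_dataset` with the schema above):

```python
def split_by_membership(rows):
    """Partition WikiMIA rows into unseen (label 0) and seen (label 1) texts."""
    unseen = [r["input"] for r in rows if r["label"] == 0]
    seen = [r["input"] for r in rows if r["label"] == 1]
    return unseen, seen

# Toy rows mimicking the dataset schema (illustrative only)
toy_rows = [
    {"input": "text published after the pretraining cutoff", "label": 0},
    {"input": "text likely seen during pretraining", "label": 1},
]
unseen, seen = split_by_membership(toy_rows)
```

An MIA method is then evaluated on how well its score separates the two groups.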
## 🛠️ Codebase
For evaluating MIA methods on our datasets, visit our [GitHub repository](https://github.com/swj0419/detect-pretrain-code).
## ⭐ Citing our Work
If you find our codebase and datasets beneficial, kindly cite our work:
```bibtex
@article{ ...,
title={ .... },
author={... },
year={ 2023 },
journal={ arXiv preprint arXiv:2308.04430 },
url={ https://arxiv.org/abs/2308.04430 }
}
```
|
RIW/small-coco-wm_50_2 | 2023-10-08T03:32:30.000Z | [
"region:us"
] | RIW | null | null | null | 0 | 28 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: 'null'
- name: width
dtype: int64
- name: height
dtype: int64
- name: original_width
dtype: int64
- name: original_height
dtype: int64
- name: exif
dtype: string
- name: sha256
dtype: string
splits:
- name: train
num_bytes: 781729596.182
num_examples: 8362
- name: validation
num_bytes: 851865993.632
num_examples: 8514
download_size: 554825307
dataset_size: 1633595589.8140001
---
# Dataset Card for "small-coco-wm_50_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
conv_questions | 2023-06-02T12:18:49.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:open-domain-qa",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source... | null | ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs.
It contains 11,200 conversations which can be evaluated over Wikidata. The questions feature a variety of complex
question phenomena like comparisons, aggregations, compositionality, and temporal reasoning. | @InProceedings{christmann2019look,
title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},
author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},
booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
pages={729--738},
year={2019}
} | null | 3 | 27 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
language_bcp47:
- en-US
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
- text-generation
- fill-mask
task_ids:
- open-domain-qa
- dialogue-modeling
pretty_name: ConvQuestions
dataset_info:
features:
- name: domain
dtype: string
- name: seed_entity
dtype: string
- name: seed_entity_text
dtype: string
- name: questions
sequence: string
- name: answers
sequence:
sequence: string
- name: answer_texts
sequence: string
splits:
- name: train
num_bytes: 3589880
num_examples: 6720
- name: validation
num_bytes: 1241778
num_examples: 2240
- name: test
num_bytes: 1175656
num_examples: 2240
download_size: 3276017
dataset_size: 6007314
---
# Dataset Card for ConvQuestions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ConvQuestions page](https://convex.mpi-inf.mpg.de)
- **Repository:** [GitHub](https://github.com/PhilippChr/CONVEX)
- **Paper:** [Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion](https://arxiv.org/abs/1910.03262)
- **Leaderboard:** [ConvQuestions leaderboard](https://convex.mpi-inf.mpg.de)
- **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de)
### Dataset Summary
ConvQuestions is the first realistic benchmark for conversational question answering over
knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata.
They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk,
with conversations from five domains: Books, Movies, Soccer, Music, and TV Series.
The questions feature a variety of complex question phenomena like comparisons, aggregations,
compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable
fair comparison across diverse methods. The data gathering setup was kept as natural as
possible, with the annotators selecting entities of their choice from each of the five domains,
and formulating the entire conversation in one session. All questions in a conversation are
from the same Turker, who also provided gold answers to the questions. For suitability to knowledge
graphs, questions were constrained to be objective or factoid in nature, but no other restrictive
guidelines were set. A notable property of ConvQuestions is that several questions are not
answerable by Wikidata alone (as of September 2019), but the required facts can, for example,
be found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper
(https://dl.acm.org/citation.cfm?id=3358016).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
en
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'domain': 'music',
'seed_entity': 'https://www.wikidata.org/wiki/Q223495',
'seed_entity_text': 'The Carpenters',
'questions': [
'When did The Carpenters sign with A&M Records?',
'What song was their first hit?',
'When did Karen die?',
'Karen had what eating problem?',
'and how did she die?'
],
'answers': [
[
'1969'
],
[
'https://www.wikidata.org/wiki/Q928282'
],
[
'1983'
],
[
'https://www.wikidata.org/wiki/Q131749'
],
[
'https://www.wikidata.org/wiki/Q181754'
]
],
'answer_texts': [
'1969',
'(They Long to Be) Close to You',
'1983',
'anorexia nervosa',
'heart failure'
]
}
```
### Data Fields
- `domain`: a `string` feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv_series']
- `seed_entity`: a `string` feature. Wikidata ID of the topic entity.
- `seed_entity_text`: a `string` feature. Surface form of the topic entity.
- `questions`: a `list` of `string` features. List of questions (initial question and follow-up questions).
- `answers`: a `list` of `lists` of `string` features. List of answers, given as Wikidata IDs or literals (e.g. timestamps or names).
- `answer_texts`: a `list` of `string` features. List of surface forms of the answers.
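Since `questions`, `answers`, and `answer_texts` are parallel lists, a conversation can be flattened into question–answer pairs with a plain `zip`. A minimal sketch using the first two turns of the example instance above:

```python
# Align each question of a conversation with its answer surface form.
# The record reuses the first two turns of the example instance above.
record = {
    "questions": [
        "When did The Carpenters sign with A&M Records?",
        "What song was their first hit?",
    ],
    "answer_texts": [
        "1969",
        "(They Long to Be) Close to You",
    ],
}

qa_pairs = list(zip(record["questions"], record["answer_texts"]))
print(qa_pairs[0])
# ('When did The Carpenters sign with A&M Records?', '1969')
```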
### Data Splits
|train|validation|test|
|----:|---------:|----:|
| 6720| 2240| 2240|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
With insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of their choice, as this is an intuitive mental model that humans may have when satisfying real information needs via their search assistants.
#### Who are the annotators?
Local students (Saarland Informatics Campus) and AMT Master Workers.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@InProceedings{christmann2019look,
title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion},
author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard},
booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management},
pages={729--738},
year={2019}
}
```
### Contributions
Thanks to [@PhilippChr](https://github.com/PhilippChr) for adding this dataset. |
DanL/scientific-challenges-and-directions-dataset | 2022-10-25T08:56:00.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:CORD-19",
"language:en",
"arxiv:2108.13751",
"arxiv:2004.10706",
"region:us"
] | DanL | null | null | null | 2 | 27 | ---
annotations_creators:
- expert-generated
language_creators: []
language:
- en
license: []
multilinguality:
- monolingual
pretty_name: DanL/scientific-challenges-and-directions-dataset
source_datasets:
- CORD-19
task_categories:
- text-classification
task_ids:
- multi-label-classification
---
# Dataset Card for scientific-challenges-and-directions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository: [repo](https://github.com/Dan-La/scientific-challenges-and-directions)**
- **Paper: [A Search Engine for Discovery of Scientific Challenges and Directions](https://arxiv.org/abs/2108.13751)**
- **Point of Contact: lahav@mail.tau.ac.il,tomh@allenai.org**
### Dataset Summary
The scientific challenges and directions dataset is a collection of 2894 sentences and their surrounding contexts, from 1786 full-text papers in the [CORD-19](https://arxiv.org/abs/2004.10706) corpus, labeled for classification of _challenges_ and _directions_ by expert annotators with biomedical and bioNLP backgrounds.
At a high level, our labels are defined as follows:
* **Challenge**: A sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap.
* **Research direction**: A sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications or hints that an issue is worthy of exploration.
The dataset was developed to help scientists and medical professionals discover challenges and potential directions across scientific literature.
### Languages
The language in the dataset is English as written by authors of the scientific papers in the CORD-19 corpus.
## Dataset Structure
### Data Instances
For each instance, there is a unique id, a string for the text sentence, a string for the previous sentence, a string for the next sentence, and a list for the challenge and direction labels.
```
{'id': 'PMC7152165_152',
'label': [0.0, 0.0],
'next_sent': 'The railways brought a new technology and vast engineering and architectural structures into Britain’s rural and urban landscapes.',
'prev_sent': 'In Britain, improvements in coaching technologies and roads helped to increase stage coach speeds in the late eighteenth and early nineteenth centuries, while the railway construction boom of the 1830s and 1840s led to a massive reduction in journey times, and the emergence of distinctly new experiences and geographies.',
'text': 'Britain’s railway companies were among the nation’s largest employers in the nineteenth century, and they facilitated the mobility of passengers and important commodities.'}
```
### Data Fields
* id: A string as a unique id for the instance. The id is composed of the unique PMC id of the paper, an underscore, and the index of the sentence within the paper.
* next_sent: A string of the sentence that follows the _text_ of the instance. If the text is the last in its paragraph, the string is saved as '|'.
* prev_sent: A string of the sentence that precedes the _text_ of the instance. If the text is the first in its paragraph, the string is saved as '|'.
* text: A string of the sentence we seek to classify.
* label: A list of 2 values - the first is the label for _challenge_ and the second for _direction_. Each value may be either 0, indicating that the _text_ is **not** a _challenge_ or _direction_, or 1, indicating that the _text_ is a _challenge_ or _direction_. Each instance can be a _challenge_, a _direction_, both, or neither.
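The two-element `label` list can be decoded into named flags; `decode_label` is a hypothetical helper, not part of the dataset:

```python
# Decode the label list: label[0] marks a challenge, label[1] a direction.
def decode_label(label):
    return {"challenge": bool(label[0]), "direction": bool(label[1])}

print(decode_label([1.0, 0.0]))  # {'challenge': True, 'direction': False}
print(decode_label([1.0, 1.0]))  # {'challenge': True, 'direction': True}
```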
### Data Splits
The scientific-challenges-and-directions dataset has 3 splits: _train_, _dev_, and _test_. Each instance appears in only one split. The splits are stratified, with no overlap in papers.
| Labels | Train | Dev | Test | All |
|:----------------------------:|:------:|:-----:|:----:|:----:|
| Not Challenge, Not Direction | 602 | 146 | 745 | 1493 |
| Not Challenge, Direction | 106 | 25 | 122 | 253 |
| Challenge, Not Direction | 288 | 73 | 382 | 743 |
| Challenge, Direction | 155 | 40 | 210 | 405 |
## Dataset Creation
### Curation Rationale
The resource was developed to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research.
### Source Data
#### Initial Data Collection and Normalization
See section 3.1 in our [paper](https://arxiv.org/abs/2108.13751).
#### Who are the source language producers?
The authors of the subset of full-text papers in the [CORD-19 dataset](https://arxiv.org/abs/2004.10706), which at the time of creating our dataset included roughly 180K documents.
### Annotations
#### Annotation process
See section 3.1 in our [paper](https://arxiv.org/abs/2108.13751).
#### Who are the annotators?
Four expert annotators with biomedical and bioNLP backgrounds. For more details see section 3.1 in our [paper](https://arxiv.org/abs/2108.13751).
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or annotators.
## Considerations for Using the Data
### Social Impact of Dataset
As mentioned, the dataset was developed to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research.
Studies were conducted to evaluate the utility of the dataset for researchers and medical professionals, in which a prototype based on the dataset was found to outperform other biomedical search tools. For more details see section 4 in our [paper](https://arxiv.org/abs/2108.13751).
This dataset was also developed for evaluating representational systems for scientific text classification and can be used as such.
### Discussion of Biases
The source of the dataset is the full-text papers in the [CORD-19 dataset](https://arxiv.org/abs/2004.10706), so biases in CORD-19 may be replicated to our dataset.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
The dataset was developed by Dan Lahav, Jon Saad Falcon, Bailey Kuehl, Sophie Johnson, Sravanthi Parasa, Noam Shomron, Duen Horng Chau, Diyi Yang, Eric Horvitz, Daniel S. Weld and Tom Hope as part of _Tel Aviv University_, the _Allen Institute for AI_, _University of Washington_, _Georgia Institute of Technology_, _Microsoft_ and _Swedish Medical Group_.
It was supported by the Edmond J. Safra Center for Bioinformatics at Tel-Aviv University, ONR grant N00014-18-1-2193, NSF RAPID grant 2040196, the WR-F/Cable Professorship, and AI2.
### Licensing Information
[More Information Needed]
### Citation Information
If using our dataset and models, please cite:
```
@misc{lahav2021search,
title={A Search Engine for Discovery of Scientific Challenges and Directions},
author={Dan Lahav and Jon Saad Falcon and Bailey Kuehl and Sophie Johnson and Sravanthi Parasa and Noam Shomron and Duen Horng Chau and Diyi Yang and Eric Horvitz and Daniel S. Weld and Tom Hope},
year={2021},
eprint={2108.13751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Dan-La](https://github.com/Dan-La) and [@tomhoper](https://github.com/tomhoper) for adding this dataset.
|
GEM/common_gen | 2022-10-24T15:30:11.000Z | [
"task_categories:other",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:mit",
"reasoning",
"arxiv:1911.03705",
"arxiv:1910.13461",
"arxiv:2009.12677",
"arxiv:2012.00366",
"a... | GEM | CommonGen is a constrained text generation task, associated with a benchmark
dataset, to explicitly test machines for the ability of generative commonsense
reasoning. Given a set of common concepts; the task is to generate a coherent
sentence describing an everyday scenario using these concepts. | @inproceedings{lin-etal-2020-commongen,
title = "{C}ommon{G}en: A Constrained Text Generation Challenge for Generative Commonsense Reasoning",
author = "Lin, Bill Yuchen and
Zhou, Wangchunshu and
Shen, Ming and
Zhou, Pei and
Bhagavatula, Chandra and
Choi, Yejin and
Ren, Xiang",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.165",
pages = "1823--1840",
} | null | 0 | 27 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- mit
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: common_gen
tags:
- reasoning
---
# Dataset Card for GEM/common_gen
## Dataset Description
- **Homepage:** https://inklab.usc.edu/CommonGen/
- **Repository:** https://github.com/INK-USC/CommonGen
- **Paper:** https://aclanthology.org/2020.findings-emnlp.165
- **Leaderboard:** https://inklab.usc.edu/CommonGen/leaderboard.html
- **Point of Contact:** Bill Yuchen Lin
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/common_gen).
### Dataset Summary
CommonGen is an English text generation task to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts. CommonGen is challenging because it inherently requires 1) relational reasoning using background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. The dataset, constructed through a combination of crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and 50k sentences in total. Note that the CommonGen test set is private and requires submission to the external leaderboard.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/common_gen')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/common_gen).
#### website
[link](https://inklab.usc.edu/CommonGen/)
#### paper
[Link](https://aclanthology.org/2020.findings-emnlp.165)
#### authors
Bill Yuchen Lin (USC), Wangchunshu Zhou (USC), Ming Shen (USC), Pei Zhou (USC), Chandra Bhagavatula (AllenAI), Yejin Choi (AllenAI + UW), Xiang Ren (USC)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[link](https://inklab.usc.edu/CommonGen/)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Link](https://github.com/INK-USC/CommonGen)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[Link](https://aclanthology.org/2020.findings-emnlp.165)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{lin-etal-2020-commongen,
title = "{C}ommon{G}en: A Constrained Text Generation Challenge for Generative Commonsense Reasoning",
author = "Lin, Bill Yuchen and
Zhou, Wangchunshu and
Shen, Ming and
Zhou, Pei and
Bhagavatula, Chandra and
Choi, Yejin and
Ren, Xiang",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.165",
pages = "1823--1840",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Bill Yuchen Lin
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
yuchen.lin@usc.edu
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[Link](https://inklab.usc.edu/CommonGen/leaderboard.html)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
The model outputs are evaluated against the crowdsourced references, and ranked by SPICE score. The leaderboard also reports BLEU-4 and CIDEr scores.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
No information is provided on regional restrictions and we thus assume that the covered dialects are those spoken by raters on Mechanical Turk.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The concepts were extracted from multiple English image captioning datasets and the data was collected via Amazon Mechanical Turk. No information on regional restrictions is provided.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
mit: MIT License
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
CommonGen is a constrained text generation task, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Reasoning
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The speaker is required to produce a *coherent* sentence which mentions all of the source concepts, and which describes a *likely* situation that could be captured in a picture or video.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`, `independent`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
The dataset was curated by a joint team of researchers from the University of Southern California and Allen Institute for Artificial Intelligence.
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Bill Yuchen Lin (USC), Wangchunshu Zhou (USC), Ming Shen (USC), Pei Zhou (USC), Chandra Bhagavatula (AllenAI), Yejin Choi (AllenAI + UW), Xiang Ren (USC)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
The research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), the DARPA MCS program, and NSF SMA 18-29268.
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Yacine Jernite created the initial data card. It was later extended by Simon Mille. Sebastian Gehrmann migrated it to the GEMv2 format.
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
A data instance has the following fields:
- `concepts`: a `list` of `string` values denoting the concept the system should write about. Has 3 to 5 items, constitutes the `input` of the task.
- `target`: a sentence `string` mentioning all of the above mentioned `concepts`. Constitutes the desired `output` of the task.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
[
{
"concepts": ['ski', 'mountain', 'skier'],
"target": 'Skier skis down the mountain',
},
{
"concepts": ['ski', 'mountain', 'skier'],
"target": 'Three skiers are skiing on a snowy mountain.',
},
]
```
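For fine-tuning or prompting, the `concepts` list is typically flattened into a single input string. A minimal sketch; the template and the `to_prompt` helper are illustrative, not prescribed by the dataset:

```python
# Turn a CommonGen concept set into a plain-text generation prompt.
def to_prompt(concepts):
    return "Write a sentence using: " + ", ".join(concepts)

print(to_prompt(["ski", "mountain", "skier"]))
# Write a sentence using: ski, mountain, skier
```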
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
Each example in the dataset consists of a set of 3 to 5 concepts, each denoted by a single noun, verb, or adjective (the input), and a sentence using these concepts (the output). The dataset provides several such sentences for each concept set.
| | Train | Dev | Test |
|---------------------------|--------|-------|-------|
| **Total concept-sets** | 32,651 | 993 | 1,497 |
| **Total sentences** | 67,389 | 4,018 | 6,042 |
|**Average sentence length**| 10.54 | 11.55 | 13.34 |
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The dev and test set were created by sampling sets of concepts of size 4 or 5 (and as many of size 3 for the dev set) present in the source captioning datasets and having crowd-workers write reference sentences using these concepts.
Conversely, the training set has more concept sets of size 3 than of size 4 and 5, and uses the original captions from the source datasets as references.
The authors also ensured that the training, dev and test set have different combinations of unique concepts to ensure compositionality (details in [Table 1](https://arxiv.org/pdf/1911.03705v3.pdf)).
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
CommonGen is a medium sized corpus with a unique reasoning challenge and interesting evaluation possibilities.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Commonsense reasoning
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to he original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
4 challenge sets for CommonGen were added to the GEM evaluation suite.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
yes
#### Split Information
<!-- info: Describe how the new splits were created -->
<!-- scope: periscope -->
1. Data Shift
We created subsets of the training and development sets of ~500 randomly selected inputs each.
2. Transformations
We applied input scrambling on a subset of 500 randomly selected test instances; the order of the concepts was randomly reassigned.
3. Subpopulations
We created a subpopulation based on input length, i.e., the number of concepts in the input test instances. By comparing inputs of different lengths, we can see to what extent systems are able to handle different input sizes.
| Concept number | Frequency English |
|----------------|-------------------|
| 4 | 747 |
| 5 | 750 |
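The scrambling transformation above amounts to a random permutation of the concept list. A minimal sketch; the `scramble` helper and the fixed seed are assumptions, as the card does not specify how the original scrambling was implemented:

```python
import random

# Randomly reassign the order of the concepts in a test instance.
# A fixed seed keeps the scrambled challenge set reproducible (an
# assumption; the card does not describe the original seeding).
def scramble(concepts, seed=0):
    rng = random.Random(seed)
    shuffled = list(concepts)
    rng.shuffle(shuffled)
    return shuffled

print(scramble(["ski", "mountain", "skier"]))
```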
#### Split Motivation
<!-- info: What aspects of the model's generation capacities were the splits created to test? -->
<!-- scope: periscope -->
Generalization and Robustness
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
- Two variants of [BART](https://arxiv.org/abs/1910.13461), [Knowledge Graph augemnted-BART](https://arxiv.org/abs/2009.12677) and [Enhanced Knowledge Injection Model for Commonsense Generation](https://arxiv.org/abs/2012.00366), hold the top two spots on the leaderboard, followed by a fine-tuned [T5 model](https://arxiv.org/abs/1910.10683).
- The following script shows how to download and load the data, fine-tune, and evaluate a model using the ROUGE, BLEU, and METEOR metrics: [GEM sample script](https://github.com/GEM-benchmark/GEM-baseline-models/blob/main/examples/GEM-common_gen.ipynb).
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Commonsense Reasoning
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`, `BLEU`, `ROUGE`, `METEOR`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
- SPICE: An evaluation metric for image captioning that is defined over scene graphs
- CIDEr: An n-gram overlap metric based on cosine similarity between the TF-IDF weighted ngram counts
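The cosine-similarity idea behind CIDEr can be illustrated with a toy unigram version. This is only for intuition: the real metric uses stemmed n-grams up to length 4, clipping, and averaging over multiple references, all of which are omitted here.

```python
import math
from collections import Counter

def tfidf_cosine(candidate, reference, corpus):
    """Toy unigram illustration of the idea behind CIDEr:
    cosine similarity between TF-IDF-weighted word counts."""
    n_docs = len(corpus)
    df = Counter()  # document frequency of each word
    for doc in corpus:
        df.update(set(doc.lower().split()))

    def vec(sentence):
        tf = Counter(sentence.lower().split())
        # add-one smoothing in the IDF so unseen words do not divide by zero
        return {w: c * math.log(n_docs / (1 + df[w])) for w, c in tf.items()}

    a, b = vec(candidate), vec(reference)
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

Words that appear in every document get near-zero weight, so the similarity is driven by the more informative content words, which is the point of the TF-IDF weighting in CIDEr.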
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The main metrics are captioning metrics since the original concept lists were extracted from captioning datasets. A human subject study was also conducted in which five graduate students were asked to rank the "commonsense plausibility" of the outputs of two models at a time.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
The currently best-performing model, KFCNet (https://aclanthology.org/2021.findings-emnlp.249/), uses the same automatic evaluation but does not conduct any human evaluation.
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
The most relevant results can be seen on the [leaderboard](https://inklab.usc.edu/CommonGen/leaderboard.html)
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset creators selected sets of concepts that appeared in image and video captions (as identified by a POS tagger) to ensure that a likely real-world scenario including the set could be imagined and constructed. Section 3.1 of the [paper](https://arxiv.org/pdf/1911.03705v3.pdf) describes a sampling scheme which encourages diversity of sets while selecting common concepts.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The speaker is required to produce a *coherent* sentence which mentions all of the source concepts, and which describes a *likely* situation that could be captured in a picture or video.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
- [Flickr30k](https://www.mitpressjournals.org/doi/abs/10.1162/tacl_a_00166)
- [MSCOCO](https://link.springer.com/chapter/10.1007/978-3-319-10602-1_48)
- [Conceptual Captions](https://www.aclweb.org/anthology/P18-1238/)
- Video captioning datasets:
- [LSMDC](https://link.springer.com/article/10.1007/s11263-016-0987-1)
- [ActivityNet](https://openaccess.thecvf.com/content_iccv_2017/html/Krishna_Dense-Captioning_Events_in_ICCV_2017_paper.html)
- [VaTeX](https://openaccess.thecvf.com/content_ICCV_2019/html/Wang_VaTeX_A_Large-Scale_High-Quality_Multilingual_Dataset_for_Video-and-Language_Research_ICCV_2019_paper.html)
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Crowdsourced`
#### Where was it crowdsourced?
<!-- info: If crowdsourced, where from? -->
<!-- scope: periscope -->
`Amazon Mechanical Turk`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The training data consists of concept sets and captions for the source datasets. The concept sets are the sets of labels of the images or videos, selected with a heuristic to maximize diversity while ensuring that they represent likely scenarios.
The dev and test set sentences were created by Amazon Mechanical Turk crowd workers. The workers were shown an example generation and a set of 4 or 5 concept names along with their part-of-speech and asked to write:
1. One sentence mentioning all of the concepts
2. A rationale explaining how the sentence connects the concepts
A screenshot of the interface is provided in Figure 7 of the [Appendix](https://arxiv.org/pdf/1911.03705v3.pdf).
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
Information was not provided.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by data curator
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
During data collection, workers were disqualified from the pool and replaced with newcomers if their rationales were too short, if their sentences failed to cover the input concepts well, or if their output had a high perplexity under a GPT-2 model.
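Two of these checks (rationale length and concept coverage) might look as follows in simplified form. The thresholds and the prefix-based token matching are illustrative assumptions, and the GPT-2 perplexity check is omitted entirely:

```python
def passes_quality_checks(sentence, rationale, concepts,
                          min_rationale_chars=20, min_coverage=1.0):
    """Simplified worker-output checks: rationale length and concept coverage.

    Prefix matching on lowercased tokens is a crude stand-in for the
    lemma-aware matching a real pipeline would need.
    """
    if len(rationale) < min_rationale_chars:
        return False  # rationale too short
    tokens = sentence.lower().split()
    covered = sum(
        1 for c in concepts if any(t.startswith(c.lower()) for t in tokens)
    )
    return covered / len(concepts) >= min_coverage

passes_quality_checks(
    "A dog leaps to catch a thrown frisbee.",
    "Dogs commonly play fetch with frisbees in parks.",
    ["dog", "frisbee", "catch", "throw"],
)  # → True
```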
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
The data was sourced from Mechanical Turk which means that raters were aware that their annotations may be publicly released for research purposes.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The concepts are restricted to verbs, adjectives, and common nouns, and no personal information is given in the captions.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The dataset is created using data from image captioning systems and might inherit some of the social biases represented therein (see e.g. [Tang et al. 2020](https://arxiv.org/abs/2006.08315)).
Another related concern is the exposure bias introduced by the initial selection of pictures and video, which are likely to over-represent situations that are common in the US at the expense of other parts of the world (Flickr, for example, is a US-based company founded in Canada). For more discussion of the potential impacts of exposure bias, see e.g. [The Social Impact of Natural Language Processing](https://www.aclweb.org/anthology/P16-2096.pdf).
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
The concepts are restricted to verbs, adjectives, and common nouns, and no personal information is given in the captions.
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
The dataset is in English, a language with an abundance of existing resources.
The use of GPT-2 to validate development and test sentences [might be cause for similar concern](https://www.aclweb.org/anthology/D19-1339.pdf), but we do note that the authors only use the model to discount very high-perplexity sequences, which is less likely to surface those biases.
The language in the development and test set is crowdsourced, which means that it was written by workers whose main goal was speed. This is likely to impact the quality and variety of the targets. The population of crowdsource workers is also not distributed identically to the base population of the locations the workers come from, which may lead to a different representation of situations or underlying expectations of what these situations are.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
Due to the overrepresentation of US situations, the system may not work for users across the world. Moreover, only limited information on dataset quality is provided, and the system may fail as a result of unknown issues.
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
Any system needs to be evaluated on a broader set of unseen concepts than those provided in the dataset. Since the references for the test set are private, it is not known how well findings generalize beyond the collection methodology.
|
sentence-transformers/reddit-title-body | 2021-10-19T09:20:35.000Z | [
"region:us"
] | sentence-transformers | null | null | null | 5 | 27 | # Reddit (Title, Body)-Pairs
This dataset contains jsonl files of (title, body) pairs from Reddit. Each line is a JSON object of the following format:
```
{'title': 'The title of a thread', 'body': 'The longer body of the thread', 'subreddit': 'subreddit_name'}
```
The 2021 file contains submissions up to and including 2021-06. Entries in the respective files are shuffled on a monthly basis.
The data has been filtered for:
- Remove threads with an upvote_ratio < 0.5
- Only include threads with a title of more than 25 characters and bodies with `len(title)+25 < len(body) < 4096`
- Only keep threads with at least 3 comments or at least 3 upvotes.
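The published files are already filtered, but the criteria above could be re-applied to raw Pushshift records with a predicate along these lines. Note that the field names `upvote_ratio`, `num_comments`, and `ups` follow the Reddit API and are assumptions about the raw records, not fields of the released files:

```python
import gzip
import json

def keep_thread(obj):
    """Filter predicate mirroring the criteria listed above.

    Field names other than title/body are assumptions about raw
    Pushshift records.
    """
    if obj.get("upvote_ratio", 1.0) < 0.5:
        return False
    if len(obj["title"]) <= 25:
        return False
    if not (len(obj["title"]) + 25 < len(obj["body"]) < 4096):
        return False
    return obj.get("num_comments", 0) >= 3 or obj.get("ups", 0) >= 3

def read_pairs(path):
    """Stream parsed records from one of the .jsonl.gz files."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)
```

Streaming one line at a time keeps memory use flat even for the multi-million-line yearly files.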
## Overview
| File | Lines |
| --- | :---: |
| reddit_title_text_2010.jsonl.gz | 431,782
| reddit_title_text_2011.jsonl.gz | 1,673,264
| reddit_title_text_2012.jsonl.gz | 3,727,526
| reddit_title_text_2013.jsonl.gz | 5,713,956
| reddit_title_text_2014.jsonl.gz | 8,538,976
| reddit_title_text_2015.jsonl.gz | 11,064,453
| reddit_title_text_2016.jsonl.gz | 12,224,789
| reddit_title_text_2017.jsonl.gz | 13,558,139
| reddit_title_text_2018.jsonl.gz | 15,552,110
| reddit_title_text_2019.jsonl.gz | 19,224,970
| reddit_title_text_2020.jsonl.gz | 23,030,988
| reddit_title_text_2021.jsonl.gz | 12,704,958
Note: The data comes from [Pushshift](https://files.pushshift.io/reddit/). Please have a look at the respective license of Reddit and Pushshift before using the data.
Be aware that this dataset is not filtered for biases, hate speech, spam, racial slurs, etc. It depicts the content as it is posted on Reddit. |
jquiros/suicide | 2022-03-08T11:23:20.000Z | [
"region:us"
] | jquiros | null | null | null | 3 | 27 | Entry not found |
ai4bharat/IndicSentenceSummarization | 2022-10-13T06:08:31.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:5K<n<112K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
... | ai4bharat | This is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total
size of the dataset is 431K. | @inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
} | null | 0 | 27 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicSentenceSummarization
size_categories:
- 5K<n<112K
source_datasets:
- original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-sentence-summarization
---
# Dataset Card for "IndicSentenceSummarization"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicSentenceSummarization is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 431K.
### Supported Tasks and Leaderboards
**Tasks:** Sentence Summarization
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '5',
'input': 'जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया।',
'target': 'जम्मू-कश्मीर : सुरक्षाबलों के साथ मुठभेड़ में 2 आतंकवादी ढेर',
'url': 'https://www.indiatv.in/india/national-jammu-kashmir-two-millitant-killed-in-encounter-with-security-forces-574529'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: Input sentence.
- `target (strings)`: Output summary.
- `url (string)`: Source web link of the sentence.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 10,812 | 5,232 | 5,452 |
Bengali | bn | 17,035 | 2,355 | 2,384 |
Gujarati | gu | 54,788 | 8,720 | 8,460 |
Hindi | hi | 78,876 | 16,935 | 16,835 |
Kannada | kn | 61,220 | 9,024 | 1,485 |
Malayalam | ml | 2,855 | 1,520 | 1,580 |
Marathi | mr | 27,066 | 3,249 | 3,309 |
Oriya | or | 12,065 | 1,539 | 1,440 |
Punjabi | pa | 31,630 | 4,004 | 3,967 |
Tamil | ta | 23,098 | 2,874 | 2,948 |
Telugu | te | 7,119 | 878 | 862 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
It is a modified subset of [IndicHeadlineGeneration](https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration) dataset.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) |
huggan/CelebA-faces-with-attributes | 2022-04-01T08:27:55.000Z | [
"region:us"
] | huggan | null | null | null | 2 | 27 | Entry not found |
taln-ls2n/kp20k | 2023-09-13T13:15:04.000Z | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:unknown",
"keyphrase-generation",
"keyphrase-extraction",
"text-mining",
"region:us"
] | taln-ls2n | KP20k dataset for keyphrase extraction and generation in scientific paper. | @InProceedings{meng-EtAl:2017:Long,
author = {Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu},
title = {Deep Keyphrase Generation},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {July},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
pages = {582--592},
url = {http://aclweb.org/anthology/P17-1054}
} | null | 1 | 27 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- text-generation
task_ids: []
pretty_name: KP20k
tags:
- keyphrase-generation
- keyphrase-extraction
- text-mining
---
# KP20k Benchmark Dataset for Keyphrase Generation
## About
KP20k is a dataset for benchmarking keyphrase extraction and generation models.
The data is composed of 570 809 abstracts and their associated titles from scientific articles.
Details about the dataset can be found in the original paper:
- Meng et al 2017.
[Deep keyphrase Generation](https://aclanthology.org/P17-1054.pdf)
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 582–592
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper:
- Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
## Content
The dataset is divided into the following three splits:
| Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 530 809 | 5.29 | 58.19 | 10.93 | 17.36 | 13.52 |
| Test | 20 000 | 5.28 | 58.40 | 10.84 | 17.20 | 13.56 |
| Validation | 20 000 | 5.27 | 58.20 | 10.94 | 17.26 | 13.61 |
The following data fields are available:
- **id**: unique identifier of the document. **NB** There were no ids in the original dataset. The ids were generated using the python module shortuuid (https://pypi.org/project/shortuuid/)
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of the author assigned keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
**NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract). |
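A much-simplified present-keyphrase check can illustrate this ordering. It uses exact lowercase substring matching, whereas the dataset itself matches after spacy tokenization and Porter stemming, so this sketch misses morphological variants:

```python
def present_keyphrases_in_order(title, abstract, keyphrases):
    """Return the keyphrases that occur verbatim in title + abstract,
    sorted by first occurrence (a crude stand-in for the stemmed
    matching described above)."""
    text = f"{title} {abstract}".lower()
    hits = [(text.find(kp.lower()), kp) for kp in keyphrases]
    # keep only phrases that occur, sorted by position of first occurrence
    return [kp for pos, kp in sorted(h for h in hits if h[0] >= 0)]

present_keyphrases_in_order(
    "Deep keyphrase generation",
    "We study neural models for generating keyphrases from abstracts.",
    ["neural models", "keyphrase generation", "topic modeling"],
)  # → ["keyphrase generation", "neural models"]
```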