id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
sagnikrayc/adversarial_hotpotqa | 2023-08-21T22:47:53.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:afl-3.0",
"region:us"
] | sagnikrayc | This dataset is from the paper: "Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for
Multi-Hop QA" by Yichen Jiang and Mohit Bansal.
The dataset was created using the code provided in the repo: https://github.com/jiangycTarheel-zz/Adversarial-MultiHopQA. | @inproceedings{jiang-bansal-2019-avoiding,
title = "Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop {QA}",
author = "Jiang, Yichen and
Bansal, Mohit",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1262",
doi = "10.18653/v1/P19-1262",
pages = "2726--2736",
abstract = "Multi-hop question answering requires a model to connect multiple pieces of evidence scattered in a long context to answer the question. In this paper, we show that in the multi-hop HotpotQA (Yang et al., 2018) dataset, the examples often contain reasoning shortcuts through which models can directly locate the answer by word-matching the question with a sentence in the context. We demonstrate this issue by constructing adversarial documents that create contradicting answers to the shortcut but do not affect the validity of the original answer. The performance of strong baseline models drops significantly on our adversarial test, indicating that they are indeed exploiting the shortcuts rather than performing multi-hop reasoning. After adversarial training, the baseline{'}s performance improves but is still limited on the adversarial test. Hence, we use a control unit that dynamically attends to the question at different reasoning hops to guide the model{'}s multi-hop reasoning. We show that our 2-hop model trained on the regular data is more robust to the adversaries than the baseline. After adversarial training, it not only achieves significant improvements over its counterpart trained on regular data, but also outperforms the adversarially-trained baseline significantly. Finally, we sanity-check that these improvements are not obtained by exploiting potential new shortcuts in the adversarial data, but indeed due to robust multi-hop reasoning skills of the models.",
} | null | 0 | 71 | ---
license: afl-3.0
task_categories:
- question-answering
language:
- en
pretty_name: Adversarial-MultiHopQA
size_categories:
- 10K<n<100K
---
This dataset is from the paper: "Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA" by Yichen Jiang and Mohit Bansal.
The dataset was created using the code provided in the [original GitHub repo](https://github.com/jiangycTarheel-zz/Adversarial-MultiHopQA).
This is the ACL citation for the paper:
```
@inproceedings{jiang-bansal-2019-avoiding,
title = "Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop {QA}",
author = "Jiang, Yichen and
Bansal, Mohit",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1262",
doi = "10.18653/v1/P19-1262",
pages = "2726--2736",
abstract = "Multi-hop question answering requires a model to connect multiple pieces of evidence scattered in a long context to answer the question. In this paper, we show that in the multi-hop HotpotQA (Yang et al., 2018) dataset, the examples often contain reasoning shortcuts through which models can directly locate the answer by word-matching the question with a sentence in the context. We demonstrate this issue by constructing adversarial documents that create contradicting answers to the shortcut but do not affect the validity of the original answer. The performance of strong baseline models drops significantly on our adversarial test, indicating that they are indeed exploiting the shortcuts rather than performing multi-hop reasoning. After adversarial training, the baseline{'}s performance improves but is still limited on the adversarial test. Hence, we use a control unit that dynamically attends to the question at different reasoning hops to guide the model{'}s multi-hop reasoning. We show that our 2-hop model trained on the regular data is more robust to the adversaries than the baseline. After adversarial training, it not only achieves significant improvements over its counterpart trained on regular data, but also outperforms the adversarially-trained baseline significantly. Finally, we sanity-check that these improvements are not obtained by exploiting potential new shortcuts in the adversarial data, but indeed due to robust multi-hop reasoning skills of the models.",
}
``` |
imoxto/prompt_injection_hackaprompt_gpt35 | 2023-08-29T13:21:20.000Z | [
"region:us"
] | imoxto | null | null | null | 0 | 71 | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 271856355
num_examples: 227042
download_size: 35972535
dataset_size: 271856355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "prompt_injection_hackaprompt_gpt35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
smangrul/chat-instruct-mixer | 2023-09-08T05:44:19.000Z | [
"region:us"
] | smangrul | null | null | null | 2 | 71 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 169947792.7111158
num_examples: 73302
- name: test
num_bytes: 48395025.62775446
num_examples: 23318
download_size: 123606462
dataset_size: 218342818.33887026
---
# Chat-Instruct-Mixer Dataset
This dataset is focused on improving LLM logical reasoning and conversational skills. It comprises the following datasets:
| Dataset Name | Train Mixing Percentage/Samples | Test Mixing Percentage/Samples |
|--------------------------------------------------------------|--------------|------------------|
| [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) | 100% | 300 samples |
| [GAIR/lima](https://huggingface.co/datasets/GAIR/lima) | 100% | 518 samples |
| [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) | 100% minus the samples set aside for test split | 2500 samples |
| [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) | 10000 samples from GPT-4 split | 5000 samples |
| [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) | 10000 samples from GPT-4 split | 5000 samples |
| [stingning/ultrachat](https://huggingface.co/datasets/stingning/ultrachat) | 10000 samples | 5000 samples |
| [jondurbin/airoboros-2.2](https://huggingface.co/datasets/jondurbin/airoboros-2.2) | 10000 Samples while filtering out samples with `skip_prompt_formatting==True` | 5000 samples |
Code for Creating this dataset: [ToDo]()
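Until that link is filled in, the mixing recipe implied by the table can be sketched in pure Python. This is a hypothetical illustration, not the actual creation code; the dataset names and sample counts below are toy stand-ins:

```python
import random

def mix(datasets, spec, rng):
    """Combine datasets either whole ("all") or by a fixed sample count,
    mirroring the mixing table above. `datasets` maps name -> list of rows."""
    mixed = []
    for name, how in spec.items():
        rows = datasets[name]
        if how == "all":
            mixed.extend(rows)
        else:
            mixed.extend(rng.sample(rows, min(how, len(rows))))
    rng.shuffle(mixed)
    return mixed

# Toy stand-ins for the real source datasets.
data = {"guanaco": [f"g{i}" for i in range(50)],
        "openorca": [f"o{i}" for i in range(100)]}
train = mix(data, {"guanaco": "all", "openorca": 20}, random.Random(0))
print(len(train))  # 70 (all 50 + 20 sampled)
```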
|
openbmb/UltraFeedback | 2023-09-30T16:39:29.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] | openbmb | null | null | null | 24 | 71 | ---
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
## Introduction
- [GitHub Repo](https://github.com/thunlp/UltraFeedback)
- [UltraRM-13b](https://huggingface.co/openbmb/UltraRM-13b)
- [UltraCM-13b](https://huggingface.co/openbmb/UltraCM-13b)
UltraFeedback is a **large-scale, fine-grained, diverse preference dataset**, used for training powerful reward models and critic models. We collect about 64k prompts from diverse resources (including UltraChat, ShareGPT, Evol-Instruct, TruthfulQA, FalseQA, and FLAN). We then use these prompts to query multiple LLMs (see the model list below) and generate 4 different responses for each prompt, resulting in a total of 256k samples.
To collect high-quality preference and textual feedback, we design a fine-grained annotation instruction, which contains 4 different aspects, namely **instruction-following**, **truthfulness**, **honesty** and **helpfulness**. We then ask GPT-4 to annotate the collected samples based on the instructions.
## Features
- 🆚 **Scale**: UltraFeedback consists of 64k prompts, 256k responses, and 380k high-quality feedback annotations. RLHF researchers can further construct around 1 million comparison pairs to train their reward models.
- 🌈 **Diversity**: As a preference dataset, diversity is a core requirement for UltraFeedback. We collect prompts from various sources and query a diverse set of state-of-the-art open-source and commercial models. To further increase diversity, we intentionally select base models from different families, i.e., LLaMA, Falcon, StarChat, MPT, GPT, and Bard. We also apply various principles to stimulate the models to complete instructions in different ways.
- 🤯 **High-density**: UltraFeedback provides both numerical and textual feedback. Moreover, we wrote fine-grained annotation documents to help rate responses in all dimensions.
## Dataset Construction
### Instruction Sampling
We sample 63,967 instructions from 6 publicly available, high-quality datasets. We include all instructions from TruthfulQA and FalseQA, and randomly sample 10k instructions from Evol-Instruct, 10k from UltraChat, and 20k from ShareGPT. For Flan, we adopt a stratified sampling strategy, randomly sampling 3k instructions from the "Co" subset and 10 instructions per task for the other three subsets, excluding those with overly long instructions.
```json
{
"evol_instruct": 10000,
"false_qa": 2339,
"flan": 20939,
"sharegpt": 19949,
"truthful_qa": 811,
"ultrachat": 9929
}
```
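The per-task stratified sampling used for Flan can be sketched as follows. This is a minimal, hypothetical illustration (the real task names, length filter, and counts differ):

```python
import random

def stratified_sample(instructions_by_task, per_task=10, max_len=512, seed=0):
    """Sample up to `per_task` instructions from each task,
    skipping overly long instructions."""
    rng = random.Random(seed)
    sampled = []
    for task, items in instructions_by_task.items():
        short = [x for x in items if len(x) <= max_len]
        sampled.extend(rng.sample(short, min(per_task, len(short))))
    return sampled

# Hypothetical toy pool: two tasks with a handful of instructions each.
pool = {
    "task_a": [f"a-{i}" for i in range(30)],
    "task_b": [f"b-{i}" for i in range(5)],
}
picked = stratified_sample(pool)
print(len(picked))  # 15 (10 from task_a + all 5 from task_b)
```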
### Model Sampling
To prevent the reward model from overfitting to a certain text style or capturing spurious correlations between text style and rewards, we select base models of all levels, with varying sizes, architectures, and training data, to complete the instructions. We set up a pool of 17 models:
- Commercial Models: GPT-4, GPT-3.5 Turbo, Bard
- LLaMA family:
1. LLaMA-2-7B-chat, LLaMA-2-13B-chat, LLaMA-2-70B-chat
2. UltraLM-13B, UltraLM-65B
3. WizardLM-7B, WizardLM-13B, WizardLM-70B
4. Vicuna-33B
5. Alpaca-7B
- Non-LLaMA series:
1. Falcon-40B-instruct
2. MPT-30B-chat
3. StarChat-Beta
4. Pythia-12B
### Principle Sampling
Following [1] and [2], we define a set of principles to explicitly align model behaviors along different aspects. We set up a pool of 5 principles: Helpfulness, Truthfulness, Honesty, Verbalized Calibration, and Harmlessness. For each instruction, we randomly sample 4 models to complete it, and for each completion, we sample a principle and add it to the system prompt to align the model's behavior. Since different datasets have different characteristics, not all datasets are suitable for all principles. The following table shows the principle distribution for each dataset.
| Dataset       | Principle |
| ------------- | ------------------------------------------------------------ |
| Evol Instruct | 100% Helpful |
| FalseQA | 100% Truthful |
| Flan | 60% Helpful, 20% Truthful, 20% Verbalized Calibration |
| ShareGPT | 60% Helpful, 20% Truthful, 18% Honesty, 2% Verbalized Calibration |
| TruthfulQA | 100% Truthful |
| UltraChat | 60% Helpful, 20% Truthful, 18% Honesty, 2% Verbalized Calibration |
[1] Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. Sun et al.
[2] Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Mukherjee et al.
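The sampling procedure described above (4 models per instruction, one principle per completion) can be sketched like this. The model names follow the pool above and the weights follow the ShareGPT row of the table, but the code itself is an illustrative sketch, not the authors' pipeline:

```python
import random

# The pool of 17 models listed above.
MODEL_POOL = [
    "gpt-4", "gpt-3.5-turbo", "bard",
    "llama-2-7b-chat", "llama-2-13b-chat", "llama-2-70b-chat",
    "ultralm-13b", "ultralm-65b",
    "wizardlm-7b", "wizardlm-13b", "wizardlm-70b",
    "vicuna-33b", "alpaca-7b",
    "falcon-40b-instruct", "mpt-30b-chat", "starchat", "pythia-12b",
]

# Per-dataset (principle, weight) pairs, taken from the table above
# (only two sources shown here for brevity).
PRINCIPLE_DIST = {
    "sharegpt": [("helpfulness", 0.60), ("truthfulness", 0.20),
                 ("honesty", 0.18), ("verbalized_calibration", 0.02)],
    "truthful_qa": [("truthfulness", 1.0)],
}

def sample_completion_plan(source, rng):
    """Draw 4 distinct models and one principle per completion."""
    models = rng.sample(MODEL_POOL, 4)
    names, weights = zip(*PRINCIPLE_DIST[source])
    return [(m, rng.choices(names, weights=weights, k=1)[0]) for m in models]

rng = random.Random(42)
plan = sample_completion_plan("sharegpt", rng)
print(plan)  # e.g. [('vicuna-33b', 'helpfulness'), ...]
```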
## Dataset Format
```JSONC
{
"source": "sharegpt", // the dataset where the instruction comes from
"id": "D2TC9GP_0", // the id of the instruction in the source dataset
"instruction": "I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports",
"models": [ // four models that are randomly sampled from the model pool to complete the instruction
"falcon-40b-instruct",
"gpt-4",
"starchat",
"wizardlm-7b"
],
"completions": [ // four completions corresponding to the four models
{ // completion 1
"model": "falcon-40b-instruct",
"principle": "helpfulness", // principle used to align model behavior
"custom_system_prompt": "As an AI assistant, ensure your response offers the perfect blend of accuracy, positivity, and intrigue. Strive to be educational while keeping the user engaged.", // the system prompt corresponding to the principle, sampled from a pool of GPT-4 generated system prompts
"response": "Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!",
"annotations": {
"instruction_following": [
{
"Rating": "2",
"Rationale": "The text only partially addresses the task goal by providing a general list of activities in Cairo without considering the specific needs of the family members, such as limited walking ability and adventure preferences."
}
],
"honesty": [
{
"Rating": "3",
"Rationale": "The response is confident but contains subtle mistakes. It suggests whitewater rafting on the Nile and diving in the Red Sea, which are not feasible for a 4-5 day trip to Cairo. The rest of the suggestions are accurate and relevant to the family's preferences."
}
],
"truthfulness": [
{
"Type": [
"1",
"2"
],
"Rationale": "The text suggests whitewater rafting on the Nile and diving in the Red Sea, which are not available in Cairo. These activities are factually incorrect and not aligned with the instruction.",
"Rating": "3",
"Rationale For Rating": "The text provides some truthful information, but the inclusion of whitewater rafting and diving in the Red Sea causes partial hallucination and misunderstanding."
}
],
"helpfulness": [
{
"Type": [
"1",
"2"
],
"Rationale": "The response is clear and relevant, providing useful information, but it does not address the limited walking ability of the grandparents or the adventure sports preferences of the sister, mom, and the user.",
"Rating": "3",
"Rationale For Rating": "The text is correct and provides useful information, but it does not fully address the specific needs and preferences of the family."
}
]
}
},
{ // completion 2
...
},
{ // completion 3
...
},
{ // completion 4
...
}
]
},
```
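One common way to use these annotations is to turn each example's completions into (chosen, rejected) comparison pairs by rating. The pairing recipe below is a hedged sketch of one option, not something prescribed by the dataset:

```python
def to_preference_pair(example, aspect="helpfulness"):
    """Pick the highest- and lowest-rated completions on one aspect
    to form a (chosen, rejected) pair for reward-model training."""
    def rating(c):
        return int(c["annotations"][aspect][0]["Rating"])
    ranked = sorted(example["completions"], key=rating, reverse=True)
    return ranked[0]["response"], ranked[-1]["response"]

# Toy example in the format shown above (responses abbreviated).
example = {
    "completions": [
        {"model": "falcon-40b-instruct", "response": "resp-A",
         "annotations": {"helpfulness": [{"Rating": "3"}]}},
        {"model": "gpt-4", "response": "resp-B",
         "annotations": {"helpfulness": [{"Rating": "5"}]}},
    ]
}
chosen, rejected = to_preference_pair(example)
print(chosen, rejected)  # resp-B resp-A
```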
## Dataset Example
Here we present an example from UltraFeedback.
> **User:** I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports
> **Assistant 1 (falcon-40b-instruct):** Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!
> **Assistant 2 (gpt-4):** In Cairo for 4-5 days, the following activities cater to your family's preferences, with consideration for the limited walking ability of your grandparents:\n\n1. Visit the Pyramids of Giza and the Great Sphinx: A must-see in Cairo, it's an incredible experience for all ages. You can explore the premises using a hired camel, horse, or a horse-drawn carriage which accommodate the needs of your grandparents while keeping the experience enjoyable.\n\n2. Egyptian Museum: This museum in Tahrir Square contains over 120,000 artifacts of Egypt's history. A wheelchair can be arranged for the grandparents to comfortably explore the museum at their own pace.\n\n3. Khan el-Khalili Bazaar: A famous market in Cairo, suitable for souvenir shopping and experiencing the local culture. There are many cafes and restaurants your family can rest at during the visit.\n\n4. Nile River Cruise: Enjoy a dinner cruise with your family on the famous Nile River. It includes entertainment and a beautiful view of the city by night. Boats often have ramps and accessible facilities for limited mobility passengers.\n\n5. Felucca Ride: During the day, go for a more traditional sailboat ride called a Felucca. This can be done on an off-day or as a relaxation activity after an active day.\n\nFor adventure activities, you and your mom and sister can explore these options:\n\n1. Sandboarding: Head to the desert for a thrilling sandboarding experience on the dunes just outside Cairo.\n\n2. Hot Air Balloon Ride: Experience Egypt from above with a hot air balloon ride in Luxor. It would require a day trip from Cairo but is an incredible experience.\n\n3. Quad Biking: Go for a guided quad biking adventure in the desert, enjoying panoramic views of the iconic Pyramids.\n\nSpace the activities according to your family's preferences and ensure rest periods in the afternoon. Consider hiring a private guide or joining a wheelchair-accessible tour since you're concerned about your grandparents' walking ability.
> **Assistant 3 (starchat):** Here are some of the top things to do in Cairo, Egypt, with a family of 5:\n\n- Giza Pyramids: The Giza Pyramids, located just outside Cairo, are a must-see for anyone visiting the city. The Great Pyramid of Khufu, the Great Pyramid of Khafre, and the Great Pyramid of Menkaure make up one of the Seven Wonders of the Ancient World and are an impressive sight to behold.\n\n- Egyptian Museum: The Egyptian Museum, located in the heart of Cairo, is home to one of the world's largest and most impressive collections of ancient Egyptian art and artifacts. Visitors can explore the museum's extensive collection, which includes items such as mummies, sarcophagi, statues, paintings, and more.\n\n- Citadel of Salah El-Din: Located on a hill overlooking Cairo, the Citadel of Salah El-Din offers stunning views of the city and is an excellent place to get a sense of Cairo's rich history. Visitors can explore the various museums and mosques within the citadel, and also take a stroll through the beautiful gardens.\n\n- Khan El-Khalili Bazaar: Khan El-Khalili Bazaar is one of Cairo's most popular attractions, a bustling market where visitors can find everything from spices and pottery to clothes and jewelry. It's a great place to do some shopping and explore the local culture.\n\n- Alexandria: Just an hour's drive from Cairo, Alexandria is a historic city with a rich legacy. Visitors can explore the ancient Roman Amphitheater, the Qaitbay Citadel, and the Catacombs of Kom el-Shoqafa. They can also take a boat trip to the island of Montazah, where they can see the famous Graffiti Wall and other street art.\n\nAll of these attractions are family-friendly, with plenty of things to see and do. The Giza Pyramids in particular are very accessible and well-maintained, making them suitable for all ages.
> **Assistant 4 (wizardlm-7b):** If you are looking for adventure sports, you can explore the pyramids of Giza and go hot air ballooning over the desert. Another option could be to take a camel ride in the desert or go kayaking in the Nile River. For family-friendly activities, you can visit the Egyptian Museum, Cairo Tower, and Khan el-Khalili Bazaar. Additionally, you can take a day trip to the historical city of Alexandria and visit the Pompey's Pillar and the Roman Amphitheater. Another option could be to go on a guided tour of the ancient city of Petra in Jordan, which is a short flight from Cairo.
## Limitations
- Although GPT-4 can provide well-aligned annotations and textual feedback for most samples, we must note that GPT-4 also makes mistakes and provides inaccurate feedback. |
vitaliy-sharandin/climate-krakow-temp-monthly | 2023-09-27T11:57:15.000Z | [
"region:us"
] | vitaliy-sharandin | null | null | null | 0 | 71 | ---
dataset_info:
features:
- name: Absolute maximum temperature [°C]
dtype: float64
- name: Absolute minimum temperature [°C]
dtype: float64
- name: Average monthly temperature [°C]
dtype: float64
- name: dt
dtype: timestamp[ns]
splits:
- name: train
num_bytes: 27904
num_examples: 872
download_size: 17326
dataset_size: 27904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "climate-krakow-temp-monthly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yashnbx/l27b-E02-large-b10-1314-3 | 2023-09-30T16:29:18.000Z | [
"region:us"
] | yashnbx | null | null | null | 0 | 71 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: test
num_bytes: 1013014
num_examples: 146
- name: train
num_bytes: 9077266
num_examples: 1314
download_size: 1662927
dataset_size: 10090280
---
# Dataset Card for "l27b-E02-large-b10-1314-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EduardoPacheco/wuerstchen-hugging-face-discord | 2023-10-11T00:15:36.000Z | [
"license:apache-2.0",
"region:us"
] | EduardoPacheco | null | null | null | 0 | 71 | ---
license: apache-2.0
dataset_info:
features:
- name: caption
dtype: string
- name: link
dtype: string
- name: message_id
dtype: string
- name: timestamp
dtype: string
splits:
- name: train
num_bytes: 272560
num_examples: 882
download_size: 137482
dataset_size: 272560
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cmu_hinglish_dog | 2023-03-17T10:14:14.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:hi",
"license:cc-by-sa-3.0",
"license:gfdl",
... | null | This is a collection of text conversations in Hinglish (code mixing between Hindi-English) and their corresponding English only versions. Can be used for Translating between the two. | @inproceedings{cmu_dog_emnlp18,
title={A Dataset for Document Grounded Conversations},
author={Zhou, Kangyan and Prabhumoye, Shrimai and Black, Alan W},
year={2018},
booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}
}
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
} | null | 4 | 70 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
- hi
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- multilingual
- translation
pretty_name: CMU Document Grounded Conversations
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
dataset_info:
features:
- name: date
dtype: string
- name: docIdx
dtype: int64
- name: translation
dtype:
translation:
languages:
- en
- hi_en
- name: uid
dtype: string
- name: utcTimestamp
dtype: string
- name: rating
dtype: int64
- name: status
dtype: int64
- name: uid1LogInTime
dtype: string
- name: uid1LogOutTime
dtype: string
- name: uid1response
struct:
- name: response
sequence: int64
- name: type
dtype: string
- name: uid2response
struct:
- name: response
sequence: int64
- name: type
dtype: string
- name: user2_id
dtype: string
- name: whoSawDoc
sequence: string
- name: wikiDocumentIdx
dtype: int64
splits:
- name: train
num_bytes: 3142398
num_examples: 8060
- name: test
num_bytes: 379521
num_examples: 960
- name: validation
num_bytes: 368726
num_examples: 942
download_size: 8749685
dataset_size: 3890645
---
# Dataset Card for CMU Document Grounded Conversations
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CMU Hinglish DoG](http://festvox.org/cedar/data/notyet/)
- **Repository:** [CMU Document Grounded Conversations (English version)](https://github.com/festvox/datasets-CMU_DoG)
- **Paper:** [CMU Document Grounded Conversations (English version)](https://arxiv.org/pdf/1809.07358.pdf)
- **Point of Contact:**
### Dataset Summary
This is a collection of text conversations in Hinglish (code-mixing between Hindi and English) and their corresponding English-only versions. It can be used for translation between the two. The dataset has been provided by Prof. Alan Black's group from CMU.
### Supported Tasks and Leaderboards
- `abstractive-mt`
### Languages
## Dataset Structure
### Data Instances
A typical data point comprises a Hinglish text (key `hi_en`) and its English version (key `en`). The `docIdx` field contains the index of the wiki document section that is current when the utterance is said; there are 4 sections in total for each document. The `uid` field holds the user id of this utterance.
An example from the CMU_Hinglish_DoG train set looks as follows:
```
{'rating': 2,
'wikiDocumentIdx': 13,
'utcTimestamp': '2018-03-16T17:48:22.037Z',
'uid': 'user2',
'date': '2018-03-16T17:47:21.964Z',
'uid2response': {'response': [1, 2, 3, 5], 'type': 'finish'},
'uid1LogInTime': '2018-03-16T17:47:21.964Z',
'user2_id': 'USR664',
'uid1LogOutTime': '2018-03-16T18:02:29.072Z',
'whoSawDoc': ['user1', 'user2'],
'status': 1,
'docIdx': 0,
'uid1response': {'response': [1, 2, 3, 4], 'type': 'finish'},
'translation': {'en': 'The director is Zack Snyder, 27% Rotten Tomatoes, 4.9/10.',
'hi_en': 'Zack Snyder director hai, 27% Rotten Tomatoes, 4.9/10.'}}
```
### Data Fields
- `date`: the time the file is created, as a string
- `docIdx`: the current section index of the wiki document when the utterance is said. There are in total 4 sections for each document.
- `translation`:
- `hi_en`: The text in Hinglish
- `en`: The text in English
- `uid`: the user id of this utterance.
- `utcTimestamp`: the server utc timestamp of this utterance, as a string
- `rating`: a number from 1 to 3. A larger number means the quality of the conversation is better.
- `status`: status as an integer
- `uid1LogInTime`: optional login time of user 1, as a string
- `uid1LogOutTime`: optional logout time of user 1, as a string
- `uid1response`: a json object contains the status and response of user after finishing the conversation. Fields in the object includes:
- `type`: one of ['finish', 'abandon', 'abandonWithouAnsweringFeedbackQuestion']. 'finish' means the user successfully finishes the conversation, either by completing 12 or 15 turns or because the other user leaves the conversation first. 'abandon' means the user abandons the conversation in the middle but still enters the feedback page. 'abandonWithouAnsweringFeedbackQuestion' means the user just disconnects or closes the web page without providing feedback.
- `response`: the answer to the post-conversation questions. The worker can choose multiple of them. The options presented to the user are as follows:
For type 'finish'
1: The conversation is understandable.
2: The other user is actively responding me.
3: The conversation goes smoothly.
For type 'abandon'
1: The other user is too rude.
2: I don't know how to proceed with the conversation.
3: The other user is not responding to me.
For users given the document
4: I have watched the movie before.
5: I have not watched the movie before.
For the users without the document
4: I will watch the movie after the other user's introduction.
5: I will not watch the movie after the other user's introduction.
- `uid2response`: same as uid1response
- `user2_id`: the generated user id of user 2
- `whoSawDoc`: Should be one of ['user1'], ['user2'], ['user1', 'user2']. Indicating which user read the document.
- `wikiDocumentIdx`: the index of the wiki document.
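The post-conversation response codes listed above can be decoded with a small helper. This is a hypothetical convenience, not part of the dataset; note that codes 4 and 5 mean different things depending on whether the user saw the document (the document-given variant is shown here):

```python
# Meanings of the response codes for type == "finish", for a user
# who was given the document (codes 4/5 differ for the other user).
FINISH_RESPONSES = {
    1: "The conversation is understandable.",
    2: "The other user is actively responding me.",
    3: "The conversation goes smoothly.",
    4: "I have watched the movie before.",
    5: "I have not watched the movie before.",
}

def decode_response(uid_response):
    """Map a uid1response/uid2response dict to readable feedback strings."""
    if uid_response["type"] != "finish":
        return []
    return [FINISH_RESPONSES[c] for c in uid_response["response"]]

# The uid2response value from the train-set example shown earlier:
print(decode_response({"response": [1, 2, 3, 5], "type": "finish"}))
```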
### Data Splits
| name |train|validation|test|
|----------|----:|---------:|---:|
|CMU DOG | 8060| 942| 960|
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
The Hinglish dataset is derived from the original CMU DoG (Document Grounded Conversations Dataset). More info about that can be found in the [repo](https://github.com/festvox/datasets-CMU_DoG)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was initially created by Prof. Alan W. Black's group at CMU.
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@inproceedings{
cmu_dog_emnlp18,
title={A Dataset for Document Grounded Conversations},
author={Zhou, Kangyan and Prabhumoye, Shrimai and Black, Alan W},
year={2018},
booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}
}
```
### Contributions
Thanks to [@Ishan-Kumar2](https://github.com/Ishan-Kumar2) for adding this dataset. |
ett | 2022-11-18T22:07:07.000Z | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:... | null | The data of Electricity Transformers from two separated counties
in China collected for two years at hourly and 15-min frequencies.
Each data point consists of the target value "oil temperature" and
6 power load features. The train/val/test is 12/4/4 months. | @inproceedings{haoyietal-informer-2021,
author = {Haoyi Zhou and
Shanghang Zhang and
Jieqi Peng and
Shuai Zhang and
Jianxin Li and
Hui Xiong and
Wancai Zhang},
title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},
volume = {35},
number = {12},
pages = {11106--11115},
publisher = {{AAAI} Press},
year = {2021},
} | null | 3 | 70 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language: []
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Electricity Transformer Temperature
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
dataset_info:
- config_name: h1
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 241978
num_examples: 1
- name: test
num_bytes: 77508960
num_examples: 240
- name: validation
num_bytes: 33916080
num_examples: 120
download_size: 2589657
dataset_size: 111667018
- config_name: h2
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 241978
num_examples: 1
- name: test
num_bytes: 77508960
num_examples: 240
- name: validation
num_bytes: 33916080
num_examples: 120
download_size: 2417960
dataset_size: 111667018
- config_name: m1
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 967738
num_examples: 1
- name: test
num_bytes: 1239008640
num_examples: 960
- name: validation
num_bytes: 542089920
num_examples: 480
download_size: 10360719
dataset_size: 1782066298
- config_name: m2
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: feat_static_cat
sequence: uint64
- name: feat_dynamic_real
sequence:
sequence: float32
- name: item_id
dtype: string
splits:
- name: train
num_bytes: 967738
num_examples: 1
- name: test
num_bytes: 1239008640
num_examples: 960
- name: validation
num_bytes: 542089920
num_examples: 480
download_size: 9677236
dataset_size: 1782066298
---
# Dataset Card for [Electricity Transformer Temperature](https://github.com/zhouhaoyi/ETDataset)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Electricity Transformer Dataset](https://github.com/zhouhaoyi/ETDataset)
- **Repository:** https://github.com/zhouhaoyi/ETDataset
- **Paper:** [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436)
- **Point of Contact:** [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn)
### Dataset Summary
The electric power distribution problem concerns distributing electricity to different areas depending on their sequential usage. Predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperature, etc. However, no existing method can perform long-term prediction on very long real-world data with high precision. Any false prediction may damage the electrical transformer. Currently, without an efficient method to predict future electric usage, managers have to make decisions based on an empirical number that is much higher than the real-world demand, which causes unnecessary waste of electricity and equipment depreciation. The oil temperature, on the other hand, reflects the condition of the transformer. One of the most efficient strategies is therefore to predict whether the electrical transformer's oil temperature will stay in a safe range and thereby avoid unnecessary waste. To address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided 2 years' worth of data.
Specifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The dataset is obtained from 2 electricity transformers at 2 stations and comes at a `1H` (hourly) or `15T` (15-minute) frequency, containing 2 years * 365 days * 24 hours (* 4 for `15T`) = 17,520 (70,080 for `15T`) data points.
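The point counts above can be verified with a quick sketch:

```python
# Sanity-check of the point counts stated above.
hours_per_year = 365 * 24
hourly_points = 2 * hours_per_year        # 1H frequency over 2 years
quarter_hour_points = hourly_points * 4   # 15T frequency

print(hourly_points, quarter_hour_points)  # 17520 70080
```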
The target time series is the **O**il **T**emperature and the dataset comes with the following 6 covariates in the univariate setup:
* **H**igh **U**se**F**ul **L**oad
* **H**igh **U**se**L**ess **L**oad
* **M**iddle **U**se**F**ul **L**oad
* **M**iddle **U**se**L**ess **L**oad
* **L**ow **U**se**F**ul **L**oad
* **L**ow **U**se**L**ess **L**oad
### Dataset Usage
To load a particular variant of the dataset just specify its name e.g:
```python
load_dataset("ett", "m1", multivariate=False) # univariate 15-min frequency dataset from first transformer
```
or to specify a prediction length:
```python
load_dataset("ett", "h2", prediction_length=48) # multivariate dataset from second transformer with prediction length of 48 (hours)
```
### Supported Tasks and Leaderboards
The time series data is split into train/validation/test sets of 12/4/4 months, respectively. Given the prediction length (default: 1 day, i.e. 24 hours or 24*4 `15T` steps), we create rolling windows of this size for the validation/test sets.
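The rolling-window construction for the validation/test splits can be sketched as follows; this is an illustration of the idea, not the dataset builder's actual code, and the toy series and numbers below are made up:

```python
def rolling_windows(series, context_end, prediction_length, num_windows):
    """Cut `num_windows` evaluation windows of `prediction_length` steps,
    each extending one more forecast horizon past `context_end`."""
    windows = []
    for i in range(1, num_windows + 1):
        end = context_end + i * prediction_length
        windows.append(series[:end])  # context plus i forecast horizons
    return windows

series = list(range(100))  # toy hourly series
windows = rolling_windows(series, context_end=52, prediction_length=24, num_windows=2)
print([len(w) for w in windows])  # [76, 100]
```

Each window keeps the full history up to its horizon, so a model can be evaluated on successive `prediction_length`-step forecasts.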
#### `time-series-forecasting`
##### `univariate-time-series-forecasting`
The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split. The covariates are stored in the `feat_dynamic_real` key of each time series.
##### `multivariate-time-series-forecasting`
The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. Similar to the univariate setting, the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split.
### Languages
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'start': datetime.datetime(2012, 1, 1, 0, 0),
'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...],
'feat_static_cat': [0],
'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...],
'item_id': 'OT'
}
```
### Data Fields
For the univariate regular time series, each series has the following keys:
* `start`: a datetime of the first entry of each time series in the dataset
* `target`: an array[float32] of the actual target values
* `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset
* `feat_dynamic_real`: optional array of covariate features
* `item_id`: a string identifier of each time series in a dataset for reference
For the multivariate time series the `target` is a vector of the multivariate dimension for each time point.
### Data Splits
The time series data is split into train/validation/test sets of 12/4/4 months, respectively.
## Dataset Creation
### Curation Rationale
Develop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
* [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn)
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```tex
@inproceedings{haoyietal-informer-2021,
author = {Haoyi Zhou and
Shanghang Zhang and
Jieqi Peng and
Shuai Zhang and
Jianxin Li and
Hui Xiong and
Wancai Zhang},
title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},
volume = {35},
number = {12},
pages = {11106--11115},
publisher = {{AAAI} Press},
year = {2021},
}
```
### Contributions
Thanks to [@kashif](https://github.com/kashif) for adding this dataset. |
s-nlp/paradetox | 2023-09-08T08:59:53.000Z | [
"task_categories:text-generation",
"language:en",
"license:openrail++",
"region:us"
] | s-nlp | null | null | null | 7 | 70 | ---
license: openrail++
task_categories:
- text-generation
language:
- en
---
# ParaDetox: Detoxification with Parallel Data (English)
This repository contains information about the ParaDetox dataset -- the first parallel corpus for the detoxification task -- as well as models and an evaluation methodology for the detoxification of English texts. The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at the ACL 2022 main conference.
## ParaDetox Collection Pipeline
The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform, in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
All these steps were designed to ensure high data quality and to automate the collection process. For more details, please refer to the original paper.
## ParaDetox Dataset
As a result, we get paraphrases for 11,939 toxic sentences (on average 1.66 paraphrases per sentence), 19,766 paraphrases total.
In addition to all ParaDetox dataset, we also make public [samples](https://huggingface.co/datasets/s-nlp/en_non_detoxified) that were marked by annotators as "cannot rewrite" in *Task 1* of crowdsource pipeline.
# Detoxification evaluation
The automatic evaluation of the models is based on three parameters:
* *style transfer accuracy* (**STA**): percentage of non-toxic outputs identified by a style classifier. We pretrained a toxicity classifier on Jigsaw data and released it in the HuggingFace🤗 [repo](https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier).
* *content preservation* (**SIM**): cosine similarity between the embeddings of the original text and the output computed with the model of [Wieting et al. (2019)](https://aclanthology.org/P19-1427/).
* *fluency* (**FL**): percentage of fluent sentences identified by a RoBERTa-based classifier of linguistic acceptability trained on the [CoLA dataset](https://nyu-mll.github.io/CoLA/).
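As an illustrative sketch (not the paper's evaluation code), per-sentence outputs of the three scorers could be aggregated into corpus-level numbers like so; the sample labels and similarity values are made up:

```python
def aggregate_metrics(nontoxic, similarity, fluent):
    """nontoxic/fluent: 0/1 labels per output sentence; similarity: cosine scores in [0, 1]."""
    n = len(nontoxic)
    sta = 100.0 * sum(nontoxic) / n  # STA: % of non-toxic outputs
    sim = sum(similarity) / n        # SIM: mean content similarity
    fl = 100.0 * sum(fluent) / n     # FL: % of fluent outputs
    return {"STA": sta, "SIM": sim, "FL": fl}

scores = aggregate_metrics(
    nontoxic=[1, 1, 0, 1],
    similarity=[1.0, 0.5, 0.75, 0.75],
    fluent=[1, 0, 1, 1],
)
print(scores)  # {'STA': 75.0, 'SIM': 0.75, 'FL': 75.0}
```

In practice, `nontoxic` would come from the toxicity classifier, `similarity` from the sentence-embedding model, and `fluent` from the CoLA-based acceptability classifier mentioned above.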
All code used in our experiments to evaluate different detoxification models can be run via the Colab notebook [](https://colab.research.google.com/drive/1xTqbx7IPF8bVL2bDCfQSDarA43mIPefE?usp=sharing)
## Detoxification model
The **new SOTA** for the detoxification task -- a BART (base) model trained on the ParaDetox dataset -- is released in the HuggingFace🤗 repository [here](https://huggingface.co/SkolkovoInstitute/bart-base-detox).
You can also check out our [demo](https://detoxifier.nlp.zhores.net/junction/) and telegram [bot](https://t.me/rudetoxifierbot).
## Citation
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
## Contacts
If you find some issue, do not hesitate to add it to [Github Issues](https://github.com/skoltech-nlp/paradetox/issues).
For any questions, or to obtain the TEST SET, please contact: Daryna Dementieva (dardem96@gmail.com)
ceyda/fashion-products-small | 2022-07-21T08:24:03.000Z | [
"region:us"
] | ceyda | null | null | null | 4 | 70 | For test purposes!
Preprocessed version of https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-dataset
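The preprocessing caps each image's longest side at 512 pixels while preserving the aspect ratio; a sketch of the size computation (the exact rounding used in the original preprocessing is an assumption):

```python
def capped_size(width, height, max_side=512):
    """Return new (width, height) with the longer side scaled down to max_side."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # already small enough, leave unchanged
    scale = max_side / longest
    return round(width * scale), round(height * scale)

print(capped_size(2400, 1800))  # (512, 384)
print(capped_size(300, 200))    # (300, 200)
```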
Images resized so that their maximum dimension is 512 pixels. |
KevinSpaghetti/cadec | 2022-10-06T13:09:46.000Z | [
"region:us"
] | KevinSpaghetti | null | null | null | 1 | 70 | Entry not found |
bigbio/linnaeus | 2022-12-22T15:44:50.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | bigbio | Linnaeus is a novel corpus of full-text documents manually annotated for species mentions. | @Article{gerner2010linnaeus,
title={LINNAEUS: a species name identification system for biomedical literature},
author={Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
journal={BMC bioinformatics},
volume={11},
number={1},
pages={1--17},
year={2010},
publisher={BioMed Central}
} | null | 0 | 70 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: LINNAEUS
homepage: http://linnaeus.sourceforge.net/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for LINNAEUS
## Dataset Description
- **Homepage:** http://linnaeus.sourceforge.net/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
Linnaeus is a novel corpus of full-text documents manually annotated for species mentions.
## Citation Information
```
@Article{gerner2010linnaeus,
title={LINNAEUS: a species name identification system for biomedical literature},
author={Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
journal={BMC bioinformatics},
volume={11},
number={1},
pages={1--17},
year={2010},
publisher={BioMed Central}
}
```
|
Kaludi/Customer-Support-Responses | 2023-03-27T23:11:45.000Z | [
"region:us"
] | Kaludi | null | null | null | 1 | 70 | Entry not found |
mstz/madelon | 2023-04-16T17:34:04.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"madelon",
"tabular_classification",
"UCI",
"region:us"
] | mstz | null | null | null | 0 | 70 | ---
language:
- en
tags:
- madelon
- tabular_classification
- UCI
pretty_name: Madelon
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- Madelon
license: cc
---
# Madelon
The [Madelon dataset](https://archive-beta.ics.uci.edu/dataset/171/madelon) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Artificial dataset with continuous input variables.
Highly non-linear classification problem.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| madelon | Binary classification | |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/madelon")["train"]
``` |
azcorpus/azcorpus_v0 | 2023-09-20T10:24:11.000Z | [
"license:openrail",
"region:us"
] | azcorpus | null | null | null | 13 | 70 | ---
extra_gated_prompt: "You agree to not use the dataset to conduct experiments that cause harm to human subjects."
extra_gated_fields:
Name and Surname: text
Email: text
Company: text
Purpose of Use: text
I agree to use this dataset for non-commercial use ONLY: checkbox
license: openrail
---

# azcorpus - The largest open-source NLP corpus for Azerbaijani (1.9M documents, ~ 18M sentences)
__Due to ongoing maintenance activities, only a portion of our corpus is currently available for access.__
In recent years, deep learning models have been widely used in NLP, yielding excellent results. However, most NLP research has focused on high-resource languages such as English. There is a significant gap in NLP research for low-resource languages, and Azerbaijani is no exception: the availability of adequate corpora remains limited, especially for less-resourced languages such as Azerbaijani.
Therefore, this study aimed to contribute to the NLP research community by building the largest NLP corpus for the Azerbaijani language.
## Corpus Summary
“azcorpus”, built for text generation purposes, contains a total of 1.9 million documents drawn from a variety of sources. The corpus is designed to provide a broad range of linguistic data for natural language processing and is organized by genre and topic, with texts covering subjects including politics, economics, science, culture, sport, history, society, and more.
Texts were selected from a variety of sources including newspapers, magazines, academic journals, wikipedia articles and books. The corpus includes both contemporary and historical texts, providing a rich linguistic and cultural context for natural language processing applications.
___
## Corpus structure
### Data fields
- id: Document id
- text: Newline-separated content
- source: Document source
- reliability: Subjective cleaning evaluation rate
- license: Document license
### Data Splits
This corpus has 3 sources (az_books, az_wiki, and az_news) and 1,876,492 cleaned documents.
| Source name | Number of Instances | Size (GB) |
| ------------- | --------------------|:----------------------|
| az_books | 1,540,732 | 19.5 |
| az_wiki | 98,882 | 0.9 |
| az_news | 236,878 | 3.8 |
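Once downloaded, the JSON records can be filtered locally using the fields listed above; the sample records and the reliability threshold here are invented for illustration:

```python
records = [
    {"id": 1, "text": "Birinci sənəd.", "source": "az_wiki", "reliability": 0.9, "license": "cc"},
    {"id": 2, "text": "İkinci sənəd.", "source": "az_news", "reliability": 0.4, "license": "cc"},
    {"id": 3, "text": "Üçüncü sənəd.", "source": "az_wiki", "reliability": 0.7, "license": "cc"},
]

def select(records, source, min_reliability):
    """Keep ids of records from one source whose reliability meets the threshold."""
    return [r["id"] for r in records
            if r["source"] == source and r["reliability"] >= min_reliability]

print(select(records, "az_wiki", 0.6))  # [1, 3]
```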
___
## Methodology
The first step in building "azcorpus" was to collect text data from various sources.
The news websites were selected based on their popularity and the diversity of topics covered.
Additionally, a collection of ebooks in Azerbaijani was obtained from various online sources. We have expanded the collection to encompass not only fiction but also scholarly works in fields such as physics and chemistry.
Source-specific cleaning techniques were applied separately to ensure consistency and accuracy in the corpus. Further details on the methodology will be provided in our forthcoming paper.
To ensure the ethical use of the corpus, we collected only publicly available data and did not collect any personal or sensitive information. We also ensured that the corpus is used for research purposes only and not for commercial gain. For legal reasons, we do not plan to disclose the sources at this time.
___
## Corpus Usage
To obtain comprehensive guidance on how to use "azcorpus", please refer to the detailed usage instructions provided in this [notebook](https://github.com/azcorpus/azcorpus_v0/blob/main/azcorpus_v0.ipynb).
```python
corpus = AzCorpus(access_token = "your_token")
# To obtain a corpus in the raw JSON format
corpus.generate_samples()
```
Downloading the entire corpus takes approximately 25 minutes to 2 hours, depending on your internet connection speed. We are currently refining the download script to improve efficiency.
___
## Considerations for Using the Corpus
#### Social Impact
Our work has the potential to contribute to the community by providing a valuable resource for the development of new text-generation tools in Azerbaijani.
"azcorpus" demonstrates the importance of building large NLP corpora for under-resourced languages, and highlights the social impact of such resources. By making this corpus available to the wider community, we hope to stimulate further research and development in the field of Azerbaijani text generation, and contribute to the broader goal of promoting linguistic diversity and cultural heritage. Future studies could explore the potential community impact of our work.
#### Biases and Limitations
Addressing potential bias in machine learning corpora is a common concern in research. We acknowledge that our dataset may be subject to bias, and we employed several techniques to mitigate this issue. However, we recognize that our approach may still have limitations. It is therefore important to exercise caution with models trained on a version of "azcorpus" that has not been adequately filtered, as this may affect the resulting models. In particular, it is crucial to be mindful of any biases that may be present in "azcorpus_v0". Future work could further investigate these issues and explore additional methods to address bias in the corpus.
___
## Additional Information
#### Corpus authors
The corpus was put together by [Huseyn Kishiyev](https://www.linkedin.com/in/huseynkishiyev/), [Jafar Isbarov](https://www.linkedin.com/in/jafar-isbarov/), [Kanan Suleymanli](https://www.linkedin.com/in/kanan-suleyman/), [Khazar Heydarli](https://www.linkedin.com/in/xezer-heyderli/), [Leyla Eminova](https://www.linkedin.com/in/leyla-eminova/) and [Nijat Zeynalov](https://www.linkedin.com/in/nijat-zeynalov-064163142/).
The authors' names have been arranged in alphabetical order. All authors have equal rights and contributed equally to this work.
The authors declare no conflict of interest. There are no founding sponsors and no other role in the design of the work other than the authors; in the collection, analysis, or interpretation of data; in the writing of the manuscript, and in the decision to publish the corpus.
___ |
mstz/pima | 2023-04-16T17:57:48.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"pima",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | null | null | 0 | 70 | ---
language:
- en
tags:
- pima
- tabular_classification
- binary_classification
- UCI
pretty_name: Pima
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- pima
license: cc
---
# Pima
The [pima dataset](https://archive.ics.uci.edu/ml/datasets/Ozone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict whether a patient has diabetes.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| pima | Binary classification | Does the patient have diabetes?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/pima")["train"]
``` |
mstz/planning | 2023-04-16T17:57:54.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"planning",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_planning_relax_230,
author = {Bhatt,Rajen},
title = {{Planning Relax}},
year = {2012},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5T023}}
} | null | 0 | 70 | ---
language:
- en
tags:
- planning
- tabular_classification
- binary_classification
- UCI
pretty_name: Planning
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- planning
license: cc
---
# Planning
The [Planning dataset](https://archive.ics.uci.edu/ml/datasets/Planning) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------|
| planning | Binary classification | Is the patient in a planning state?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/planning")["train"]
``` |
mstz/spambase | 2023-04-16T18:02:22.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"spambase",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_spambase_94,
author = {Hopkins,Mark, Reeber,Erik, Forman,George & Suermondt,Jaap},
title = {{Spambase}},
year = {1999},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C53G6X}}
} | null | 0 | 70 | ---
language:
- en
tags:
- spambase
- tabular_classification
- binary_classification
- UCI
pretty_name: Spambase
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- spambase
license: cc
---
# Spambase
The [Spambase dataset](https://archive.ics.uci.edu/ml/datasets/Spambase) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Is the given mail spam?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------|
| spambase | Binary classification | Is the mail spam?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/spambase")["train"]
``` |
mstz/vertebral_column | 2023-04-16T18:03:50.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"vertebral_column",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_vertebral_column_212,
author = {Barreto,Guilherme & Neto,Ajalmar},
title = {{Vertebral Column}},
year = {2011},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5K89B}}
} | null | 0 | 70 | ---
language:
- en
tags:
- vertebral_column
- tabular_classification
- binary_classification
- UCI
pretty_name: Vertebral Column
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- vertebral
license: cc
---
# Vertebral Column
The [Vertebral Column dataset](https://archive.ics.uci.edu/ml/datasets/vertebral+column) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| abnormal | Binary classification | Is the spine abnormal?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/vertebral_column")["train"]
``` |
mstz/page_blocks | 2023-04-16T17:57:31.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"page_blocks",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"region:us"
] | mstz | null | @misc{misc_page_blocks_classification_78,
author = {Malerba,Donato},
title = {{Page Blocks Classification}},
year = {1995},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5J590}}
} | null | 0 | 70 | ---
language:
- en
tags:
- page_blocks
- tabular_classification
- binary_classification
- multiclass_classification
pretty_name: Page Blocks
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- page_blocks
- page_blocks_binary
license: cc
---
# PageBlocks
The [PageBlocks dataset](https://archive-beta.ics.uci.edu/dataset/76/page_blocks) from the [UCI repository](https://archive-beta.ics.uci.edu/).
What type of content does the page block contain?
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| page_blocks | Multiclass classification |
| page_blocks_binary| Binary classification | |
mstz/sydt | 2023-04-18T08:27:15.000Z | [
"task_categories:tabular-classification",
"language:en",
"sydt",
"tabular_classification",
"binary_classification",
"synthetic",
"region:us"
] | mstz | null | null | null | 0 | 70 | ---
language:
- en
tags:
- sydt
- tabular_classification
- binary_classification
- synthetic
pretty_name: Sydt
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- sydt
---
# Sydt
Synthetic dataset. |
edarchimbaud/earnings-estimate-stocks | 2023-10-07T23:13:59.000Z | [
"region:us"
] | edarchimbaud | null | null | null | 1 | 70 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: no_of_analysts_current_qtr
dtype: int64
- name: next_qtr
dtype: string
- name: no_of_analysts_next_qtr
dtype: int64
- name: current_year
dtype: int64
- name: no_of_analysts_current_year
dtype: int64
- name: next_year
dtype: int64
- name: no_of_analysts_next_year
dtype: int64
- name: avg_estimate_current_qtr
dtype: float64
- name: avg_estimate_next_qtr
dtype: float64
- name: avg_estimate_current_year
dtype: float64
- name: avg_estimate_next_year
dtype: float64
- name: low_estimate_current_qtr
dtype: float64
- name: low_estimate_next_qtr
dtype: float64
- name: low_estimate_current_year
dtype: float64
- name: low_estimate_next_year
dtype: float64
- name: high_estimate_current_qtr
dtype: float64
- name: high_estimate_next_qtr
dtype: float64
- name: high_estimate_current_year
dtype: float64
- name: high_estimate_next_year
dtype: float64
- name: year_ago_eps_current_qtr
dtype: float64
- name: year_ago_eps_next_qtr
dtype: float64
- name: year_ago_eps_current_year
dtype: float64
- name: year_ago_eps_next_year
dtype: float64
splits:
- name: train
num_bytes: 4921663
num_examples: 22201
download_size: 626368
dataset_size: 4921663
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "earnings-estimate-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The earnings-estimate-sp500 dataset provides earnings estimate data for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to analyze earnings estimates for systematic trading or financial analysis tasks. The dataset does not specify any associated leaderboards.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
The dataset contains the following fields:
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): The date associated with the earnings estimate data.
- current_qtr (string): The current quarter.
- no_of_analysts_current_qtr (int64): The number of analysts providing estimates for the current quarter.
- next_qtr (string): The next quarter.
- no_of_analysts_next_qtr (int64): The number of analysts providing estimates for the next quarter.
- current_year (int64): The current year.
- no_of_analysts_current_year (int64): The number of analysts providing estimates for the current year.
- next_year (int64): The next year.
- no_of_analysts_next_year (int64): The number of analysts providing estimates for the next year.
- avg_estimate_current_qtr (float64): The average estimate for the current quarter.
- avg_estimate_next_qtr (float64): The average estimate for the next quarter.
- avg_estimate_current_year (float64): The average estimate for the current year.
- avg_estimate_next_year (float64): The average estimate for the next year.
- low_estimate_current_qtr (float64): The low estimate for the current quarter.
- low_estimate_next_qtr (float64): The low estimate for the next quarter.
- low_estimate_current_year (float64): The low estimate for the current year.
- low_estimate_next_year (float64): The low estimate for the next year.
- high_estimate_current_qtr (float64): The high estimate for the current quarter.
- high_estimate_next_qtr (float64): The high estimate for the next quarter.
- high_estimate_current_year (float64): The high estimate for the current year.
- high_estimate_next_year (float64): The high estimate for the next year.
- year_ago_eps_current_qtr (float64): The earnings per share (EPS) for the current quarter a year ago.
- year_ago_eps_next_qtr (float64): The earnings per share (EPS) for the next quarter a year ago.
- year_ago_eps_current_year (float64): The earnings per share (EPS) for the current year a year ago.
- year_ago_eps_next_year (float64): The earnings per share (EPS) for the next year a year ago.
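To illustrate how these fields combine in practice, here is a minimal, self-contained sketch. The record values and the helper names (`estimate_spread`, `implied_yoy_growth`) are made up for illustration, not taken from the dataset:

```python
# Hypothetical record using the field names listed above; values are made up.
record = {
    "symbol": "AAA",
    "avg_estimate_current_qtr": 2.00,
    "low_estimate_current_qtr": 1.80,
    "high_estimate_current_qtr": 2.30,
    "year_ago_eps_current_qtr": 1.60,
}

def estimate_spread(rec: dict) -> float:
    """Range between the highest and lowest analyst estimate for the current quarter."""
    return rec["high_estimate_current_qtr"] - rec["low_estimate_current_qtr"]

def implied_yoy_growth(rec: dict) -> float:
    """Average estimate vs. the year-ago EPS, as a fraction."""
    return rec["avg_estimate_current_qtr"] / rec["year_ago_eps_current_qtr"] - 1.0

print(round(estimate_spread(record), 2))     # 0.5
print(round(implied_yoy_growth(record), 2))  # 0.25
```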
### Data Splits
The dataset consists of a single split, called "train."
## Additional Information
### Dataset Curators
This dataset does not specify any specific curators.
### Licensing Information
The earnings-estimate-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, earnings-estimate-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
edarchimbaud/eps-revisions-stocks | 2023-10-07T23:14:38.000Z | [
"region:us"
] | edarchimbaud | null | null | null | 0 | 70 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: up_last_7_days_current_qtr
dtype: float64
- name: next_qtr
dtype: string
- name: up_last_7_days_next_qtr
dtype: float64
- name: current_year
dtype: int64
- name: up_last_7_days_current_year
dtype: float64
- name: next_year
dtype: int64
- name: up_last_7_days_next_year
dtype: float64
- name: up_last_30_days_current_qtr
dtype: float64
- name: up_last_30_days_next_qtr
dtype: float64
- name: up_last_30_days_current_year
dtype: float64
- name: up_last_30_days_next_year
dtype: float64
- name: down_last_7_days_current_qtr
dtype: 'null'
- name: down_last_7_days_next_qtr
dtype: 'null'
- name: down_last_7_days_current_year
dtype: 'null'
- name: down_last_7_days_next_year
dtype: 'null'
- name: down_last_30_days_current_qtr
dtype: float64
- name: down_last_30_days_next_qtr
dtype: float64
- name: down_last_30_days_current_year
dtype: float64
- name: down_last_30_days_next_year
dtype: float64
splits:
- name: train
num_bytes: 3208211
num_examples: 20217
download_size: 262559
dataset_size: 3208211
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eps-revisions-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The eps-revisions-sp500 dataset provides information on earnings-per-share (EPS) revisions for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to analyze EPS revisions and their impact on the performance of companies in the S&P 500 index. It does not specify any particular leaderboard or evaluation metric.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string indicating the date of the recorded data.
- current_qtr (string): A string representing the current quarter.
- up_last_7_days_current_qtr (float64): The number of upward EPS revisions for the current quarter over the last 7 days.
- next_qtr (string): A string representing the next quarter.
- up_last_7_days_next_qtr (float64): The number of upward EPS revisions for the next quarter over the last 7 days.
- current_year (int64): An integer representing the current year.
- up_last_7_days_current_year (float64): The number of upward EPS revisions for the current year over the last 7 days.
- next_year (int64): An integer representing the next year.
- up_last_7_days_next_year (float64): The number of upward EPS revisions for the next year over the last 7 days.
- up_last_30_days_current_qtr (float64): The number of upward EPS revisions for the current quarter over the last 30 days.
- up_last_30_days_next_qtr (float64): The number of upward EPS revisions for the next quarter over the last 30 days.
- up_last_30_days_current_year (float64): The number of upward EPS revisions for the current year over the last 30 days.
- up_last_30_days_next_year (float64): The number of upward EPS revisions for the next year over the last 30 days.
- down_last_7_days_current_qtr (null): Always null in this release; no 7-day downward-revision data for the current quarter.
- down_last_7_days_next_qtr (null): Always null in this release; no 7-day downward-revision data for the next quarter.
- down_last_7_days_current_year (null): Always null in this release; no 7-day downward-revision data for the current year.
- down_last_7_days_next_year (null): Always null in this release; no 7-day downward-revision data for the next year.
- down_last_30_days_current_qtr (float64): The number of downward EPS revisions for the current quarter over the last 30 days.
- down_last_30_days_next_qtr (float64): The number of downward EPS revisions for the next quarter over the last 30 days.
- down_last_30_days_current_year (float64): The number of downward EPS revisions for the current year over the last 30 days.
- down_last_30_days_next_year (float64): The number of downward EPS revisions for the next year over the last 30 days.
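As a sketch of how the up/down revision counts might be combined, the snippet below computes a net 30-day revision figure. The record values and the helper name (`net_revisions_30d`) are illustrative assumptions, not part of the dataset:

```python
# Hypothetical record using the field names above; values are made up.
record = {
    "symbol": "AAA",
    "up_last_30_days_current_qtr": 4.0,
    "down_last_30_days_current_qtr": 1.0,
}

def net_revisions_30d(rec: dict) -> float:
    """Upward minus downward EPS revisions over the last 30 days (current quarter).
    Missing or null counts are treated as zero."""
    up = rec.get("up_last_30_days_current_qtr") or 0.0
    down = rec.get("down_last_30_days_current_qtr") or 0.0
    return up - down

print(net_revisions_30d(record))  # 3.0
```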
### Data Splits
A single split, called train.
## Dataset Creation
### Curation Rationale
The eps-revisions-sp500 dataset was created to provide information on EPS revisions for companies in the S&P 500 index.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from reliable sources and normalized for consistency.
### Annotations
#### Annotation Process
[N/A]
#### Annotators
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The eps-revisions-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The eps-revisions-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, eps-revisions-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
edarchimbaud/eps-trend-stocks | 2023-10-07T23:14:50.000Z | [
"region:us"
] | edarchimbaud | null | null | null | 1 | 70 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: current_estimate_current_qtr
dtype: float64
- name: next_qtr
dtype: string
- name: current_estimate_next_qtr
dtype: float64
- name: current_year
dtype: int64
- name: current_estimate_current_year
dtype: float64
- name: next_year
dtype: int64
- name: current_estimate_next_year
dtype: float64
- name: 7_days_ago_current_qtr
dtype: float64
- name: 7_days_ago_next_qtr
dtype: float64
- name: 7_days_ago_current_year
dtype: float64
- name: 7_days_ago_next_year
dtype: float64
- name: 30_days_ago_current_qtr
dtype: float64
- name: 30_days_ago_next_qtr
dtype: float64
- name: 30_days_ago_current_year
dtype: float64
- name: 30_days_ago_next_year
dtype: float64
- name: 60_days_ago_current_qtr
dtype: float64
- name: 60_days_ago_next_qtr
dtype: float64
- name: 60_days_ago_current_year
dtype: float64
- name: 60_days_ago_next_year
dtype: float64
- name: 90_days_ago_current_qtr
dtype: float64
- name: 90_days_ago_next_qtr
dtype: float64
- name: 90_days_ago_current_year
dtype: float64
- name: 90_days_ago_next_year
dtype: float64
splits:
- name: train
num_bytes: 4468878
num_examples: 20204
download_size: 788691
dataset_size: 4468878
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eps-trend-sp500"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The "eps-trend-sp500" dataset contains earnings per share (EPS) trend data for companies in the S&P 500 index. It includes information about the EPS estimates for the current quarter, next quarter, current year, and next year, as well as estimates from 7 days ago, 30 days ago, 60 days ago, and 90 days ago.
### Supported Tasks and Leaderboards
The dataset can be used to analyze EPS trends and perform financial analysis tasks. It does not specify any associated leaderboards.
### Languages
The dataset does not specify any specific language.
## Dataset Structure
### Data Instances
The dataset consists of multiple data instances, where each instance represents the EPS trend data for a specific company and date.
### Data Fields
The dataset contains the following fields:
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): The date associated with the EPS trend data.
- current_qtr (string): The current quarter.
- current_estimate_current_qtr (float64): The current estimate for the EPS in the current quarter.
- next_qtr (string): The next quarter.
- current_estimate_next_qtr (float64): The current estimate for the EPS in the next quarter.
- current_year (int64): The current year.
- current_estimate_current_year (float64): The current estimate for the EPS in the current year.
- next_year (int64): The next year.
- current_estimate_next_year (float64): The current estimate for the EPS in the next year.
- 7_days_ago_current_qtr (float64): The EPS estimate for the current quarter from 7 days ago.
- 7_days_ago_next_qtr (float64): The EPS estimate for the next quarter from 7 days ago.
- 7_days_ago_current_year (float64): The EPS estimate for the current year from 7 days ago.
- 7_days_ago_next_year (float64): The EPS estimate for the next year from 7 days ago.
- 30_days_ago_current_qtr (float64): The EPS estimate for the current quarter from 30 days ago.
- 30_days_ago_next_qtr (float64): The EPS estimate for the next quarter from 30 days ago.
- 30_days_ago_current_year (float64): The EPS estimate for the current year from 30 days ago.
- 30_days_ago_next_year (float64): The EPS estimate for the next year from 30 days ago.
- 60_days_ago_current_qtr (float64): The EPS estimate for the current quarter from 60 days ago.
- 60_days_ago_next_qtr (float64): The EPS estimate for the next quarter from 60 days ago.
- 60_days_ago_current_year (float64): The EPS estimate for the current year from 60 days ago.
- 60_days_ago_next_year (float64): The EPS estimate for the next year from 60 days ago.
- 90_days_ago_current_qtr (float64): The EPS estimate for the current quarter from 90 days ago.
- 90_days_ago_next_qtr (float64): The EPS estimate for the next quarter from 90 days ago.
- 90_days_ago_current_year (float64): The EPS estimate for the current year from 90 days ago.
- 90_days_ago_next_year (float64): The EPS estimate for the next year from 90 days ago.
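One common use of these fields is to measure how an estimate has drifted over the trailing window. The sketch below does this for the current quarter; the record values and the helper name (`estimate_drift`) are illustrative assumptions:

```python
# Hypothetical record using the field names above; values are made up.
record = {
    "symbol": "AAA",
    "current_estimate_current_qtr": 2.10,
    "90_days_ago_current_qtr": 2.00,
}

def estimate_drift(rec: dict) -> float:
    """Fractional change of the current-quarter estimate vs. 90 days ago."""
    old = rec["90_days_ago_current_qtr"]
    return (rec["current_estimate_current_qtr"] - old) / old

print(round(estimate_drift(record), 3))  # 0.05
```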
### Data Splits
The dataset consists of a single split, called "train."
## Additional Information
### Dataset Curators
The eps-trend-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The eps-trend-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, eps-trend-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
edarchimbaud/earnings-surprise-stocks | 2023-10-07T23:14:27.000Z | [
"region:us"
] | edarchimbaud | null | null | null | 1 | 70 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: id
dtype: int64
- name: fiscal_qtr_end
dtype: string
- name: date_reported
dtype: timestamp[ns]
- name: eps
dtype: float64
- name: consensus_forecast
dtype: string
- name: percentage_surprise
dtype: string
splits:
- name: train
num_bytes: 5574479
num_examples: 76015
download_size: 392666
dataset_size: 5574479
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LeoLM/MMLU_de | 2023-06-15T01:41:53.000Z | [
"license:mit",
"region:us"
] | LeoLM | null | null | null | 0 | 70 | ---
license: mit
---
# Massive Multitask Language Understanding (MMLU) in German
This dataset is intended for evaluating the German language understanding of LLMs.
It is based on the hendrycksTest dataset ([here](https://huggingface.co/datasets/cais/mmlu) and [here](https://huggingface.co/datasets/tasksource/mmlu)) and was created
by using the GPT-3.5 API to translate the entire test set and a few examples of the validation set. To ensure that the answer options follow the intended sentence structure
and are always in the correct format, GPT was prompted to output JSON. This came with some complications that were later fixed manually.
The prompt used to translate a single example was the following:
```
insert prompt here @TODO
```
This translation cost a total of ~13€ including iterating on the prompt and fixing broken examples.
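A structural check along the lines described above could look like the following sketch. The key names (`question`, `choices`) are assumptions for illustration, not the actual schema used during translation:

```python
import json

def is_well_formed(raw_item: str) -> bool:
    """Return True if a translated item parses as JSON and keeps the
    expected shape: a question string and exactly four answer options."""
    try:
        item = json.loads(raw_item)
    except json.JSONDecodeError:
        return False
    return (
        isinstance(item.get("question"), str)
        and isinstance(item.get("choices"), list)
        and len(item["choices"]) == 4
    )

ok = '{"question": "Was ist 2 + 2?", "choices": ["3", "4", "5", "6"]}'
print(is_well_formed(ok))             # True
print(is_well_formed('{"broken": '))  # False
```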
|
HausaNLP/NaijaSenti-Twitter | 2023-06-16T16:42:04.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"multilinguality:monolingual",
"multilinguality:multilingual",
"size_categor... | HausaNLP | NaijaSenti is the first large-scale human-annotated Twitter sentiment dataset for the four most widely spoken languages in Nigeria — Hausa, Igbo, Nigerian-Pidgin, and Yorùbá — consisting of around 30,000 annotated tweets per language, including a significant fraction of code-mixed tweets. | @inproceedings{muhammad-etal-2022-naijasenti,
title = "{N}aija{S}enti: A {N}igerian {T}witter Sentiment Corpus for Multilingual Sentiment Analysis",
author = "Muhammad, Shamsuddeen Hassan and
Adelani, David Ifeoluwa and
Ruder, Sebastian and
Ahmad, Ibrahim Sa{'}id and
Abdulmumin, Idris and
Bello, Bello Shehu and
Choudhury, Monojit and
Emezue, Chris Chinenye and
Abdullahi, Saheed Salahudeen and
Aremu, Anuoluwapo and
Jorge, Al{\'\i}pio and
Brazdil, Pavel",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.63",
pages = "590--602",
} | null | 0 | 70 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-classification
- sentiment-scoring
- semantic-similarity-classification
- semantic-similarity-scoring
tags:
- sentiment-analysis
- Twitter
- tweets
- sentiment
multilinguality:
- monolingual
- multilingual
size_categories:
- 100K<n<1M
language:
- hau
- ibo
- pcm
- yor
pretty_name: NaijaSenti
---
<p align="center">
  <img src="https://raw.githubusercontent.com/hausanlp/NaijaSenti/main/image/naijasenti_logo1.png" width="500">
</p>
--------------------------------------------------------------------------------
## Dataset Description
- **Homepage:** https://github.com/hausanlp/NaijaSenti
- **Repository:** [GitHub](https://github.com/hausanlp/NaijaSenti)
- **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://aclanthology.org/2022.lrec-1.63/)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Hassan Muhammad](shamsuddeen2004@gmail.com)
### Dataset Summary
NaijaSenti is the first large-scale human-annotated Twitter sentiment dataset for the four most widely spoken languages in Nigeria — Hausa, Igbo, Nigerian-Pidgin, and Yorùbá — consisting of around 30,000 annotated tweets per language, including a significant fraction of code-mixed tweets.
### Supported Tasks and Leaderboards
NaijaSenti can be used for a wide range of sentiment analysis tasks in Nigerian languages, such as sentiment classification, sentiment intensity analysis, and emotion detection. It is suitable for training and evaluating machine learning models for NLP tasks related to sentiment analysis in African languages, and it was among the datasets used for [SemEval 2023 Task 12: Sentiment Analysis for African Languages](https://codalab.lisn.upsaclay.fr/competitions/7320).
### Languages
4 most spoken Nigerian languages
* Hausa (hau)
* Igbo (ibo)
* Nigerian Pidgin (pcm)
* Yoruba (yor)
## Dataset Structure
### Data Instances
For each instance, there is a string for the tweet and a string for the label. See the NaijaSenti [dataset viewer](https://huggingface.co/datasets/HausaNLP/NaijaSenti-Twitter/viewer/hau/train) to explore more examples.
```
{
"tweet": "string",
"label": "string"
}
```
### Data Fields
The data fields are:
```
tweet: a string feature.
label: a classification label, with possible values including positive, negative and neutral.
```
### Data Splits
The NaijaSenti dataset has 3 splits: train, validation, and test. Below are the statistics for Version 1.0.0 of the dataset.
| | hau | ibo | pcm | yor |
|---|---|---|---|---|
| train | 14,172 | 10,192 | 5,121 | 8,522 |
| dev | 2,677 | 1,841 | 1,281 | 2,090 |
| test | 5,303 | 3,682 | 4,154 | 4,515 |
| total | 22,152 | 15,715 | 10,556 | 15,127 |
### How to use it
```python
from datasets import load_dataset
# you can load specific languages (e.g., Hausa). This download train, validation and test sets.
ds = load_dataset("HausaNLP/NaijaSenti-Twitter", "hau")
# train set only
ds = load_dataset("HausaNLP/NaijaSenti-Twitter", "hau", split = "train")
# test set only
ds = load_dataset("HausaNLP/NaijaSenti-Twitter", "hau", split = "test")
# validation set only
ds = load_dataset("HausaNLP/NaijaSenti-Twitter", "hau", split = "validation")
```
## Dataset Creation
### Curation Rationale
NaijaSenti Version 1.0.0 is intended for sentiment analysis and related tasks in Nigerian indigenous and creole languages: Hausa, Igbo, Nigerian Pidgin, and Yoruba.
### Source Data
Twitter
### Personal and Sensitive Information
We anonymized the tweets by replacing all *@mentions* by *@user* and removed all URLs.
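A minimal sketch of that anonymization step is shown below. The exact patterns the authors used are not published; these regexes are assumptions:

```python
import re

def anonymize(tweet: str) -> str:
    """Replace @mentions with @user, strip URLs, and normalize whitespace."""
    tweet = re.sub(r"@\w+", "@user", tweet)        # mask user handles
    tweet = re.sub(r"https?://\S+", "", tweet)     # drop URLs
    tweet = re.sub(r"\s+", " ", tweet)             # collapse leftover gaps
    return tweet.strip()

print(anonymize("@musa check this https://example.com now"))
# @user check this now
```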
## Considerations for Using the Data
### Social Impact of Dataset
The NaijaSenti dataset has the potential to improve sentiment analysis for Nigerian languages, which is essential for understanding and analyzing the diverse perspectives of people in Nigeria. This dataset can enable researchers and developers to create sentiment analysis models that are specific to Nigerian languages, which can be used to gain insights into the social, cultural, and political views of people in Nigeria. Furthermore, this dataset can help address the issue of underrepresentation of Nigerian languages in natural language processing, paving the way for more equitable and inclusive AI technologies.
## Additional Information
### Dataset Curators
* Shamsuddeen Hassan Muhammad
* Idris Abdulmumin
* Ibrahim Said Ahmad
* Bello Shehu Bello
### Licensing Information
NaijaSenti is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) License.
### Citation Information
```
@inproceedings{muhammad-etal-2022-naijasenti,
title = "{N}aija{S}enti: A {N}igerian {T}witter Sentiment Corpus for Multilingual Sentiment Analysis",
author = "Muhammad, Shamsuddeen Hassan and
Adelani, David Ifeoluwa and
Ruder, Sebastian and
Ahmad, Ibrahim Sa{'}id and
Abdulmumin, Idris and
Bello, Bello Shehu and
Choudhury, Monojit and
Emezue, Chris Chinenye and
Abdullahi, Saheed Salahudeen and
Aremu, Anuoluwapo and
Jorge, Al{\'\i}pio and
Brazdil, Pavel",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.63",
pages = "590--602",
}
```
### Contributions
> This work was carried out with support from Lacuna Fund, an initiative co-founded by The Rockefeller Foundation, Google.org, and Canada’s International Development Research Centre. The views expressed herein do not necessarily represent those of Lacuna Fund, its Steering Committee, its funders, or Meridian Institute. |
llm-book/jsnli | 2023-06-19T12:32:29.000Z | [
"size_categories:100K<n<1M",
"language:ja",
"license:cc-by-sa-4.0",
"region:us"
] | llm-book | null | null | null | 0 | 70 | ---
language:
- ja
size_categories:
- 100K<n<1M
license:
- cc-by-sa-4.0
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 97491392
num_examples: 533005
- name: validation
num_bytes: 712792
num_examples: 3916
download_size: 44931163
dataset_size: 98204184
---
# Dataset Card for llm-book/jsnli
This is the [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?日本語SNLI(JSNLI)データセット) dataset used in the book "Introduction to Large Language Models" (『大規模言語モデル入門』).
Of the JSNLI Version 1.1 release, the filtered training set (train_w_filtering) and the validation set (dev) are used.
## Licence
CC BY-SA 4.0
|
awettig/Pile-Books3-0.5B-6K-opt | 2023-07-10T19:38:57.000Z | [
"region:us"
] | awettig | null | null | null | 1 | 70 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6500959920
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1711566471
dataset_size: 6565905612
---
# Dataset Card for "Pile-Books3-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HAERAE-HUB/csatqa | 2023-09-10T17:12:24.000Z | [
"task_categories:multiple-choice",
"language:ko",
"region:us"
] | HAERAE-HUB | CSAT-QA | \ | null | 6 | 70 | ---
dataset_info:
features:
- name: test_name
dtype: string
- name: question_number
dtype: int64
- name: context
dtype: string
- name: question
dtype: string
- name: gold
dtype: int64
- name: option#1
dtype: string
- name: option#2
dtype: string
- name: option#3
dtype: string
- name: option#4
dtype: string
- name: option#5
dtype: string
- name: Category
dtype: string
- name: Human_Peformance
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4220807
num_examples: 936
download_size: 1076028
dataset_size: 4220807
task_categories:
- multiple-choice
language:
- ko
---
# Dataset Card for "CSAT-QA"
## Dataset Summary
The field of Korean Language Processing is experiencing a surge in interest,
illustrated by the introduction of open-source models such as Polyglot-Ko and proprietary models like HyperClova.
Yet, as the development of larger and superior language models accelerates, evaluation methods aren't keeping pace.
Recognizing this gap, we at HAE-RAE are dedicated to creating tailored benchmarks for the rigorous evaluation of these models.
CSAT-QA is a comprehensive collection of 936 multiple-choice question answering (MCQA) questions,
manually collected from the College Scholastic Ability Test (CSAT), a rigorous Korean university entrance exam.
The CSAT-QA is divided into two subsets: a complete version encompassing all 936 questions,
and a smaller, specialized version used for targeted evaluations.
The smaller subset further diversifies into six distinct categories:
Writing (WR), Grammar (GR), Reading Comprehension: Science (RCS), Reading Comprehension: Social Science (RCSS),
Reading Comprehension: Humanities (RCH), and Literature (LI). Moreover, the smaller subset includes the recorded accuracy of South Korean students,
providing a valuable real-world performance benchmark.
For a detailed explanation of how the CSAT-QA was created,
please check out the [accompanying blog post](https://github.com/guijinSON/hae-rae/blob/main/blog/CSAT-QA.md),
and for evaluation, check out [LM-Eval-Harness](https://github.com/EleutherAI/lm-evaluation-harness) on GitHub.
## Evaluation Results
| **Models** | **GR** | **LI** | **RCH** | **RCS** | **RCSS** | **WR** | **Average** |
|:-----------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-----------:|
| polyglot-ko-12.8B | 32.0 | 29.73 | 17.14| 10.81 | 21.43 | 18.18 | 21.55|
| gpt-3.5-wo-token | 16.0 | 32.43 | 42.86 | 18.92 | 35.71 | 0.00 | 24.32 |
| gpt-3.5-w-token | 16.0 | 35.14 | 42.86 | 18.92 | 35.71 | 9.09 | 26.29 |
| gpt-4-wo-token | 40.0 | 54.05 | **68.57** | **59.46** | **69.05** | 36.36 | **54.58** |
| gpt-4-w-token | 36.0 | **56.76** | **68.57** | **59.46** | **69.05** | 36.36 | 54.37 |
| Human Performance | **45.41** | 54.38 | 48.7 | 39.93 | 44.54 | **54.0** | 47.83 |
## How to Use
The CSAT-QA includes two subsets. The full version with 936 questions can be downloaded using the following code:
```python
from datasets import load_dataset
dataset = load_dataset("EleutherAI/CSAT-QA", "full")
```
A more condensed version, which includes human accuracy data, can be downloaded using the following code:
```python
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("EleutherAI/CSAT-QA", "GR") # Choose from WR, GR, LI, RCH, RCS, or RCSS
```
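Given the schema above, a minimal sketch of turning one row into an MCQA prompt might look like the following. The row below is invented for illustration; real rows come from `load_dataset`, and the prompt layout is an assumption rather than the format used in any official evaluation.

```python
# Hedged sketch: format one CSAT-QA row into an MCQA prompt.
# Field names (context, question, option#1..option#5, gold) follow the
# dataset schema above; the example row itself is invented.
def format_prompt(row):
    options = "\n".join(f"{i}. {row[f'option#{i}']}" for i in range(1, 6))
    return f"{row['context']}\n\n{row['question']}\n{options}\nAnswer:"

row = {
    "context": "(passage text)",
    "question": "Which option is correct?",
    "option#1": "A", "option#2": "B", "option#3": "C",
    "option#4": "D", "option#5": "E",
    "gold": 3,  # `gold` is the 1-indexed correct option, per the schema
}
prompt = format_prompt(row)
```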
## Evaluate using LM-Eval-Harness
To evaluate your model using the LM-Eval-Harness by EleutherAI, follow the steps below.
1. To install lm-eval from the github repository main branch, run:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
2. To install additional multilingual tokenization and text segmentation packages, you must install the package with the multilingual extra:
```bash
pip install -e ".[multilingual]"
```
3. Run the evaluation by:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=EleutherAI/polyglot-ko-1.3b \
--tasks csatqa_wr,csatqa_gr,csatqa_rcs,csatqa_rcss,csatqa_rch,csatqa_li \
--device cuda:0
```
## License
The copyright of this material belongs to the Korea Institute for Curriculum and Evaluation (한국교육과정평가원), and it may be used for research purposes only.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
taishi-i/nagisa_stopwords | 2023-08-06T17:58:31.000Z | [
"size_categories:n<1K",
"language:ja",
"license:mit",
"stopwords",
"region:us"
] | taishi-i | Japanese stopwords for nagisa. | null | null | 0 | 70 | ---
license: mit
tags:
- stopwords
pretty_name: stopwords
size_categories:
- n<1K
language:
- ja
---
# Japanese stopwords for nagisa
This is a stopword list of frequently used words in the Japanese language, created according to the tokenization rules of the Japanese text analysis library, [nagisa](https://github.com/taishi-i/nagisa).
This list is constructed by extracting the top 100 most commonly used words from the [CC-100 dataset](https://data.statmt.org/cc-100/) and [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch/).
To access this list of words, run the code provided below.
First, install the Hugging Face `datasets` library.
```bash
$ pip install datasets
```
After installing the library, run the following code:
```python
from datasets import load_dataset
dataset = load_dataset("taishi-i/nagisa_stopwords")
# the top 100 most commonly used words
words = dataset["nagisa_stopwords"]["words"]
# the part-of-speech list for the top 100 most commonly used words
postags = dataset["nagisa_stopwords"]["postags"]
```
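As a minimal sketch of how the list might be used, assume `words` was loaded as above; the short stopword list and the token sequence below are illustrative stand-ins, not actual output of nagisa or of the dataset.

```python
# Hedged sketch: filter tokenized text against the stopword list.
# `words` would normally be dataset["nagisa_stopwords"]["words"];
# this short list is an illustrative stand-in.
words = ["の", "に", "は", "を", "た"]
stopwords = set(words)

# e.g. tokens = nagisa.tagging(text).words in real use
tokens = ["猫", "は", "魚", "を", "食べ", "た"]
content = [t for t in tokens if t not in stopwords]
# content -> ["猫", "魚", "食べ"]
```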
|
dim/oasst_ru | 2023-08-13T14:31:15.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 70 | ---
license: mit
dataset_info:
features:
- name: conversation_ids
sequence: string
- name: conversation_text
sequence: string
- name: status
dtype: string
splits:
- name: train
num_bytes: 7962688
num_examples: 3140
download_size: 2781053
dataset_size: 7962688
---
|
silk-road/Chat-Haruhi-Fusion-A_B | 2023-08-24T16:47:29.000Z | [
"region:us"
] | silk-road | null | null | null | 3 | 70 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: context
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 259951538
num_examples: 66519
download_size: 0
dataset_size: 259951538
---
# Dataset Card for "Chat-Haruhi-Fusion-A_B"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
seara/ru_go_emotions | 2023-08-25T19:13:08.000Z | [
"task_categories:text-classification",
"task_categories:translation",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-classification",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"source_datasets:go_emoti... | seara | null | null | null | 1 | 70 | ---
dataset_info:
- config_name: raw
features:
- name: ru_text
dtype: string
- name: text
dtype: string
- name: id
dtype: string
- name: author
dtype: string
- name: subreddit
dtype: string
- name: link_id
dtype: string
- name: parent_id
dtype: string
- name: created_utc
dtype: float32
- name: rater_id
dtype: int32
- name: example_very_unclear
dtype: bool
- name: admiration
dtype: int32
- name: amusement
dtype: int32
- name: anger
dtype: int32
- name: annoyance
dtype: int32
- name: approval
dtype: int32
- name: caring
dtype: int32
- name: confusion
dtype: int32
- name: curiosity
dtype: int32
- name: desire
dtype: int32
- name: disappointment
dtype: int32
- name: disapproval
dtype: int32
- name: disgust
dtype: int32
- name: embarrassment
dtype: int32
- name: excitement
dtype: int32
- name: fear
dtype: int32
- name: gratitude
dtype: int32
- name: grief
dtype: int32
- name: joy
dtype: int32
- name: love
dtype: int32
- name: nervousness
dtype: int32
- name: optimism
dtype: int32
- name: pride
dtype: int32
- name: realization
dtype: int32
- name: relief
dtype: int32
- name: remorse
dtype: int32
- name: sadness
dtype: int32
- name: surprise
dtype: int32
- name: neutral
dtype: int32
splits:
- name: train
num_bytes: 84388976
num_examples: 211225
download_size: 41128059
dataset_size: 84388976
- config_name: simplified
features:
- name: ru_text
dtype: string
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': admiration
'1': amusement
'2': anger
'3': annoyance
'4': approval
'5': caring
'6': confusion
'7': curiosity
'8': desire
'9': disappointment
'10': disapproval
'11': disgust
'12': embarrassment
'13': excitement
'14': fear
'15': gratitude
'16': grief
'17': joy
'18': love
'19': nervousness
'20': optimism
'21': pride
'22': realization
'23': relief
'24': remorse
'25': sadness
'26': surprise
'27': neutral
- name: id
dtype: string
splits:
- name: train
num_bytes: 10118125
num_examples: 43410
- name: validation
num_bytes: 1261921
num_examples: 5426
- name: test
num_bytes: 1254989
num_examples: 5427
download_size: 7628917
dataset_size: 12635035
configs:
- config_name: raw
data_files:
- split: train
path: raw/train-*
- config_name: simplified
data_files:
- split: train
path: simplified/train-*
- split: validation
path: simplified/validation-*
- split: test
path: simplified/test-*
license: mit
task_categories:
- text-classification
- translation
task_ids:
- multi-class-classification
- multi-label-classification
- sentiment-analysis
- sentiment-classification
language:
- ru
- en
pretty_name: Ru-GoEmotions
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets:
- go_emotions
tags:
- emotion-classification
- emotion
- reddit
---
## Description
This dataset is a translation of the Google [GoEmotions](https://github.com/google-research/google-research/tree/master/goemotions) emotion classification dataset.
All features remain unchanged, except for the addition of a new `ru_text` column containing the translated text in Russian.
For the translation process, I used the [Deep translator](https://github.com/nidhaloff/deep-translator) with the Google engine.
You can find all the details about translation, raw `.csv` files and other stuff in this [Github repository](https://github.com/searayeah/ru-goemotions).
For more information also check the official original dataset [card](https://huggingface.co/datasets/go_emotions).
## Id to label
```yaml
0: admiration
1: amusement
2: anger
3: annoyance
4: approval
5: caring
6: confusion
7: curiosity
8: desire
9: disappointment
10: disapproval
11: disgust
12: embarrassment
13: excitement
14: fear
15: gratitude
16: grief
17: joy
18: love
19: nervousness
20: optimism
21: pride
22: realization
23: relief
24: remorse
25: sadness
26: surprise
27: neutral
```
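The mapping above can be applied directly to decode the integer `labels` sequence of the simplified config; a minimal sketch (the example ids are illustrative):

```python
# Decode GoEmotions label ids using the id-to-label mapping above.
ID2LABEL = [
    "admiration", "amusement", "anger", "annoyance", "approval", "caring",
    "confusion", "curiosity", "desire", "disappointment", "disapproval",
    "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief",
    "joy", "love", "nervousness", "optimism", "pride", "realization",
    "relief", "remorse", "sadness", "surprise", "neutral",
]

def decode(label_ids):
    # `labels` is a sequence because examples can carry multiple emotions
    return [ID2LABEL[i] for i in label_ids]

names = decode([0, 15])  # -> ["admiration", "gratitude"]
```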
## Label to Russian label
```yaml
admiration: восхищение
amusement: веселье
anger: злость
annoyance: раздражение
approval: одобрение
caring: забота
confusion: непонимание
curiosity: любопытство
desire: желание
disappointment: разочарование
disapproval: неодобрение
disgust: отвращение
embarrassment: смущение
excitement: возбуждение
fear: страх
gratitude: признательность
grief: горе
joy: радость
love: любовь
nervousness: нервозность
optimism: оптимизм
pride: гордость
realization: осознание
relief: облегчение
remorse: раскаяние
sadness: грусть
surprise: удивление
neutral: нейтральность
```
|
yzhuang/autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T16:50:49.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 70 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1048362149
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_instruction_v1_train_100 | 2023-09-12T17:04:55.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 70 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 177041.73335312048
num_examples: 100
- name: validation
num_bytes: 1888548.7582781457
num_examples: 1000
download_size: 1184787
dataset_size: 2065590.4916312662
---
# Dataset Card for "squad_instruction_v1_train_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vitaliy-sharandin/pollution-absolute-variation-co2 | 2023-09-20T16:05:19.000Z | [
"region:us"
] | vitaliy-sharandin | null | null | null | 0 | 70 | ---
dataset_info:
features:
- name: Entity
dtype: string
- name: Code
dtype: string
- name: Annual CO₂ emissions growth (abs)
dtype: float64
- name: Year
dtype: timestamp[ns, tz=UTC]
- name: dt
dtype: timestamp[ns, tz=UTC]
splits:
- name: train
num_bytes: 1295730
num_examples: 28944
download_size: 350866
dataset_size: 1295730
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pollution-absolute-variation-co2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sibozhu/paddington_cn | 2023-10-04T16:34:25.000Z | [
"region:us"
] | sibozhu | null | null | null | 0 | 70 | Entry not found |
EduardoPacheco/dalle-3-LAION-discord | 2023-10-11T00:05:22.000Z | [
"license:apache-2.0",
"region:us"
] | EduardoPacheco | null | null | null | 0 | 70 | ---
license: apache-2.0
dataset_info:
features:
- name: caption
dtype: string
- name: link
dtype: string
- name: message_id
dtype: string
- name: timestamp
dtype: string
splits:
- name: train
num_bytes: 710519.0
num_examples: 1558
download_size: 365120
dataset_size: 710519.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jeopardy | 2023-04-05T10:07:53.000Z | [
"language:en",
"region:us"
] | null | Dataset containing 216,930 Jeopardy questions, answers and other data.
The json file is an unordered list of questions where each question has
'category' : the question category, e.g. "HISTORY"
'value' : integer $ value of the question as string, e.g. "200"
Note: This is "None" for Final Jeopardy! and Tiebreaker questions
'question' : text of question
Note: This sometimes contains hyperlinks and other messy text, such as when there's a picture or video question
'answer' : text of answer
'round' : one of "Jeopardy!","Double Jeopardy!","Final Jeopardy!" or "Tiebreaker"
Note: Tiebreaker questions do happen but they're very rare (like once every 20 years)
'show_number' : int of show number, e.g '4680'
'air_date' : string of the show air date in format YYYY-MM-DD | null | 4 | 69 | ---
language:
- en
paperswithcode_id: null
pretty_name: jeopardy
dataset_info:
features:
- name: category
dtype: string
- name: air_date
dtype: string
- name: question
dtype: string
- name: value
dtype: int32
- name: answer
dtype: string
- name: round
dtype: string
- name: show_number
dtype: int32
splits:
- name: train
num_bytes: 35916080
num_examples: 216930
download_size: 55554625
dataset_size: 35916080
---
# Dataset Card for "jeopardy"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/](https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 12.72 MB
- **Size of the generated dataset:** 36.13 MB
- **Total amount of disk used:** 48.85 MB
### Dataset Summary
Dataset containing 216,930 Jeopardy questions, answers and other data.
The json file is an unordered list of questions where each question has
'category' : the question category, e.g. "HISTORY"
'value' : integer $ value of the question as string, e.g. "200"
Note: This is "None" for Final Jeopardy! and Tiebreaker questions
'question' : text of question
Note: This sometimes contains hyperlinks and other messy text, such as when there's a picture or video question
'answer' : text of answer
'round' : one of "Jeopardy!","Double Jeopardy!","Final Jeopardy!" or "Tiebreaker"
Note: Tiebreaker questions do happen but they're very rare (like once every 20 years)
'show_number' : int of show number, e.g '4680'
'air_date' : string of the show air date in format YYYY-MM-DD
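The notes above suggest a small normalization step when working with the raw JSON; a hedged sketch, assuming raw values look like "$200" (possibly with thousands separators) and are None/"None" for Final Jeopardy! and Tiebreaker questions:

```python
# Hedged sketch: normalize the raw `value` field described above.
# The raw JSON stores a "$"-prefixed string; the processed dataset
# stores an int32. The exact raw formatting is an assumption.
def parse_value(raw):
    if raw is None or raw == "None":
        return None  # Final Jeopardy! / Tiebreaker questions
    return int(raw.lstrip("$").replace(",", ""))

parse_value("$200")    # -> 200
parse_value("$2,000")  # -> 2000
parse_value(None)      # -> None
```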
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 12.72 MB
- **Size of the generated dataset:** 36.13 MB
- **Total amount of disk used:** 48.85 MB
An example of 'train' looks as follows.
```json
{
"air_date": "2004-12-31",
"answer": "Hattie McDaniel (for her role in Gone with the Wind)",
"category": "EPITAPHS & TRIBUTES",
"question": "'1939 Oscar winner: \"...you are a credit to your craft, your race and to your family\"'",
"round": "Jeopardy!",
"show_number": 4680,
"value": 2000
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `category`: a `string` feature.
- `air_date`: a `string` feature.
- `question`: a `string` feature.
- `value`: a `int32` feature.
- `answer`: a `string` feature.
- `round`: a `string` feature.
- `show_number`: a `int32` feature.
### Data Splits
| name |train |
|-------|-----:|
|default|216930|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. | |
offenseval_dravidian | 2023-06-01T14:59:49.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:kn",
"language:ml",
"language:ta",
"l... | null | Offensive language identification in dravidian lanaguages dataset. The goal of this task is to identify offensive language content of the code-mixed dataset of comments/posts in Dravidian Languages ( (Tamil-English, Malayalam-English, and Kannada-English)) collected from social media. | @inproceedings{dravidianoffensive-eacl,
title={Findings of the Shared Task on {O}ffensive {L}anguage {I}dentification in {T}amil, {M}alayalam, and {K}annada},
author={Chakravarthi, Bharathi Raja and
Priyadharshini, Ruba and
Jose, Navya and
M, Anand Kumar and
Mandl, Thomas and
Kumaresan, Prasanna Kumar and
Ponnsamy, Rahul and
V, Hariharan and
Sherly, Elizabeth and
McCrae, John Philip},
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
publisher = "Association for Computational Linguistics"
} | null | 2 | 69 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- kn
- ml
- ta
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: Offenseval Dravidian
tags:
- offensive-language
dataset_info:
- config_name: tamil
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not_offensive
'1': Offensive_Untargetede
'2': Offensive_Targeted_Insult_Individual
'3': Offensive_Targeted_Insult_Group
'4': Offensive_Targeted_Insult_Other
'5': not-Tamil
splits:
- name: train
num_bytes: 4214801
num_examples: 35139
- name: validation
num_bytes: 526108
num_examples: 4388
download_size: 5040217
dataset_size: 4740909
- config_name: malayalam
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not_offensive
'1': Offensive_Untargetede
'2': Offensive_Targeted_Insult_Individual
'3': Offensive_Targeted_Insult_Group
'4': Offensive_Targeted_Insult_Other
'5': not-malayalam
splits:
- name: train
num_bytes: 1944857
num_examples: 16010
- name: validation
num_bytes: 249364
num_examples: 1999
download_size: 2276736
dataset_size: 2194221
- config_name: kannada
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not_offensive
'1': Offensive_Untargetede
'2': Offensive_Targeted_Insult_Individual
'3': Offensive_Targeted_Insult_Group
'4': Offensive_Targeted_Insult_Other
'5': not-Kannada
splits:
- name: train
num_bytes: 567119
num_examples: 6217
- name: validation
num_bytes: 70147
num_examples: 777
download_size: 678727
dataset_size: 637266
config_names:
- kannada
- malayalam
- tamil
---
# Dataset Card for Offenseval Dravidian
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://competitions.codalab.org/competitions/27654#learn_the_details
- **Repository:** https://competitions.codalab.org/competitions/27654#participate-get_data
- **Paper:** Findings of the Shared Task on Offensive Language Identification in Tamil, Malayalam, and Kannada
- **Leaderboard:** https://competitions.codalab.org/competitions/27654#results
- **Point of Contact:** [Bharathi Raja Chakravarthi](mailto:bharathiraja.akr@gmail.com)
### Dataset Summary
Offensive language identification is a classification task in natural language processing (NLP) where the aim is to moderate and minimise offensive content in social media. It has been an active area of research in both academia and industry for the past two decades. There is an increasing demand for offensive language identification on social media texts, which are largely code-mixed. Code-mixing is a prevalent phenomenon in multilingual communities, and code-mixed texts are sometimes written in non-native scripts. Systems trained on monolingual data fail on code-mixed data due to the complexity of code-switching at different linguistic levels in the text. This shared task presents a new gold standard corpus for offensive language identification of code-mixed text in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English).
### Supported Tasks and Leaderboards
The goal of this task is to identify the offensive language content of the code-mixed dataset of comments/posts in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English) collected from social media. A comment/post may contain more than one sentence, but the average sentence length of the corpora is 1. Each comment/post is annotated at the comment/post level. This dataset also has class imbalance problems, reflecting real-world scenarios.
### Languages
Code-mixed text in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English).
## Dataset Structure
### Data Instances
An example from the Tamil dataset looks as follows:
| text | label |
| :------ | :----- |
| படம் கண்டிப்பாக வெற்றி பெற வேண்டும் செம்ம vara level | Not_offensive |
| Avasara patutiya editor uhh antha bullet sequence aa nee soliruka kudathu, athu sollama iruntha movie ku konjam support aa surprise element aa irunthurukum | Not_offensive |
An example from the Malayalam dataset looks as follows:
| text | label |
| :------ | :----- |
| ഷൈലോക്ക് ന്റെ നല്ല ടീസർ ആയിട്ട് പോലും ട്രോളി നടന്ന ലാലേട്ടൻ ഫാൻസിന് കിട്ടിയൊരു നല്ലൊരു തിരിച്ചടി തന്നെ ആയിരിന്നു ബിഗ് ബ്രദർ ന്റെ ട്രെയ്ലർ | Not_offensive |
| Marana mass Ekka kku kodukku oru | Not_offensive |
An example from the Kannada dataset looks as follows:
| text | label |
| :------ | :----- |
| ನಿಜವಾಗಿಯೂ ಅದ್ಭುತ heartly heltidini... plz avrigella namma nimmellara supprt beku | Not_offensive |
| Next song gu kuda alru andre evaga yar comment madidera alla alrru like madi share madi nam industry na next level ge togond hogaona. | Not_offensive |
### Data Fields
Tamil
- `text`: Tamil-English code mixed comment.
- `label`: integer from 0 to 5 that corresponds to these values: "Not_offensive", "Offensive_Untargetede", "Offensive_Targeted_Insult_Individual", "Offensive_Targeted_Insult_Group", "Offensive_Targeted_Insult_Other", "not-Tamil"
Malayalam
- `text`: Malayalam-English code mixed comment.
- `label`: integer from 0 to 5 that corresponds to these values: "Not_offensive", "Offensive_Untargetede", "Offensive_Targeted_Insult_Individual", "Offensive_Targeted_Insult_Group", "Offensive_Targeted_Insult_Other", "not-malayalam"
Kannada
- `text`: Kannada-English code mixed comment.
- `label`: integer from 0 to 5 that corresponds to these values: "Not_offensive", "Offensive_Untargetede", "Offensive_Targeted_Insult_Individual", "Offensive_Targeted_Insult_Group", "Offensive_Targeted_Insult_Other", "not-Kannada"
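A minimal sketch of decoding the integer `label`, using the names listed above (shown here for the Tamil config; the other configs differ only in the last, language-specific entry):

```python
# Map the integer `label` of the tamil config to its class name,
# in the order given by the field description above.
TAMIL_LABELS = [
    "Not_offensive",
    "Offensive_Untargetede",
    "Offensive_Targeted_Insult_Individual",
    "Offensive_Targeted_Insult_Group",
    "Offensive_Targeted_Insult_Other",
    "not-Tamil",
]

label_name = TAMIL_LABELS[0]  # a `label` of 0 means "Not_offensive"
```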
### Data Splits
| | train | validation |
|-----------|------:|-----------:|
| Tamil | 35139 | 4388 |
| Malayalam | 16010 | 1999 |
| Kannada | 6217 | 777 |
## Dataset Creation
### Curation Rationale
There is an increasing demand for offensive language identification on social media texts which are largely code-mixed. Code-mixing is a prevalent phenomenon in a multilingual community and the code-mixed texts are sometimes written in non-native scripts. Systems trained on monolingual data fail on code-mixed data due to the complexity of code-switching at different linguistic levels in the text.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
Youtube users
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
This work is licensed under a [Creative Commons Attribution 4.0 International Licence](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@article{chakravarthi-etal-2021-lre,
title = "DravidianCodeMix: Sentiment Analysis and Offensive Language Identification Dataset for Dravidian Languages in Code-Mixed Text",
author = "Chakravarthi, Bharathi Raja and
Priyadharshini, Ruba and
Muralidaran, Vigneshwaran and
Jose, Navya and
Suryawanshi, Shardul and
Sherly, Elizabeth and
McCrae, John P",
journal={Language Resources and Evaluation},
publisher={Springer}
}
```
```
@inproceedings{dravidianoffensive-eacl,
title={Findings of the Shared Task on {O}ffensive {L}anguage {I}dentification in {T}amil, {M}alayalam, and {K}annada},
author={Chakravarthi, Bharathi Raja and
Priyadharshini, Ruba and
Jose, Navya and
M, Anand Kumar and
Mandl, Thomas and
Kumaresan, Prasanna Kumar and
Ponnsamy, Rahul and
V, Hariharan and
Sherly, Elizabeth and
McCrae, John Philip},
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
publisher = "Association for Computational Linguistics"
}
```
```
@inproceedings{hande-etal-2020-kancmd,
title = "{K}an{CMD}: {K}annada {C}ode{M}ixed Dataset for Sentiment Analysis and Offensive Language Detection",
author = "Hande, Adeep and
Priyadharshini, Ruba and
Chakravarthi, Bharathi Raja",
booktitle = "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.peoples-1.6",
pages = "54--63",
abstract = "We introduce Kannada CodeMixed Dataset (KanCMD), a multi-task learning dataset for sentiment analysis and offensive language identification. The KanCMD dataset highlights two real-world issues from the social media text. First, it contains actual comments in code mixed text posted by users on YouTube social media, rather than in monolingual text from the textbook. Second, it has been annotated for two tasks, namely sentiment analysis and offensive language detection for under-resourced Kannada language. Hence, KanCMD is meant to stimulate research in under-resourced Kannada language on real-world code-mixed social media text and multi-task learning. KanCMD was obtained by crawling the YouTube, and a minimum of three annotators annotates each comment. We release KanCMD 7,671 comments for multitask learning research purpose.",
}
```
```
@inproceedings{chakravarthi-etal-2020-corpus,
title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text",
author = "Chakravarthi, Bharathi Raja and
Muralidaran, Vigneshwaran and
Priyadharshini, Ruba and
McCrae, John Philip",
booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources association",
url = "https://www.aclweb.org/anthology/2020.sltu-1.28",
pages = "202--210",
abstract = "Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",
language = "English",
ISBN = "979-10-95546-35-1",
}
```
```
@inproceedings{chakravarthi-etal-2020-sentiment,
title = "A Sentiment Analysis Dataset for Code-Mixed {M}alayalam-{E}nglish",
author = "Chakravarthi, Bharathi Raja and
Jose, Navya and
Suryawanshi, Shardul and
Sherly, Elizabeth and
McCrae, John Philip",
booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources association",
url = "https://www.aclweb.org/anthology/2020.sltu-1.25",
pages = "177--184",
abstract = "There is an increasing demand for sentiment analysis of text from social media which are mostly code-mixed. Systems trained on monolingual data fail for code-mixed data due to the complexity of mixing at different levels of the text. However, very few resources are available for code-mixed data to create models specific for this data. Although much research in multilingual and cross-lingual sentiment analysis has used semi-supervised or unsupervised methods, supervised methods still performs better. Only a few datasets for popular languages such as English-Spanish, English-Hindi, and English-Chinese are available. There are no resources available for Malayalam-English code-mixed data. This paper presents a new gold standard corpus for sentiment analysis of code-mixed text in Malayalam-English annotated by voluntary annotators. This gold standard corpus obtained a Krippendorff{'}s alpha above 0.8 for the dataset. We use this new corpus to provide the benchmark for sentiment analysis in Malayalam-English code-mixed texts.",
language = "English",
ISBN = "979-10-95546-35-1",
}
```
### Contributions
Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset. |
xed_en_fi | 2023-06-01T14:59:50.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_catego... | null | A multilingual fine-grained emotion dataset. The dataset consists of human annotated Finnish (25k) and English sentences (30k). Plutchik’s
core emotions are used to annotate the dataset with the addition of neutral to create a multilabel multiclass
dataset. The dataset is carefully evaluated using language-specific BERT models and SVMs to
show that XED performs on par with other similar datasets and is therefore a useful tool for
sentiment analysis and emotion detection. | @inproceedings{ohman2020xed,
title={XED: A Multilingual Dataset for Sentiment Analysis and Emotion Detection},
author={{\"O}hman, Emily and P{\"a}mies, Marc and Kajava, Kaisla and Tiedemann, J{\"o}rg},
booktitle={The 28th International Conference on Computational Linguistics (COLING 2020)},
year={2020}
} | null | 6 | 69 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
- fi
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- extended|other-OpenSubtitles2016
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
- multi-label-classification
- sentiment-classification
paperswithcode_id: xed
pretty_name: XedEnglishFinnish
dataset_info:
- config_name: en_annotated
features:
- name: sentence
dtype: string
- name: labels
sequence:
class_label:
names:
'0': neutral
'1': anger
'2': anticipation
'3': disgust
'4': fear
'5': joy
'6': sadness
'7': surprise
'8': trust
splits:
- name: train
num_bytes: 1018485
num_examples: 17528
download_size: 2421235
dataset_size: 1018485
- config_name: en_neutral
features:
- name: sentence
dtype: string
- name: labels
dtype:
class_label:
names:
'0': neutral
'1': anger
'2': anticipation
'3': disgust
'4': fear
'5': joy
'6': sadness
'7': surprise
'8': trust
splits:
- name: train
num_bytes: 401129
num_examples: 9675
download_size: 2421235
dataset_size: 401129
- config_name: fi_annotated
features:
- name: sentence
dtype: string
- name: labels
sequence:
class_label:
names:
'0': neutral
'1': anger
'2': anticipation
'3': disgust
'4': fear
'5': joy
'6': sadness
'7': surprise
'8': trust
splits:
- name: train
num_bytes: 756224
num_examples: 14449
download_size: 2421235
dataset_size: 756224
- config_name: fi_neutral
features:
- name: sentence
dtype: string
- name: labels
dtype:
class_label:
names:
'0': neutral
'1': anger
'2': anticipation
'3': disgust
'4': fear
'5': joy
'6': sadness
'7': surprise
'8': trust
splits:
- name: train
num_bytes: 427499
num_examples: 10794
download_size: 2421235
dataset_size: 427499
config_names:
- en_annotated
- en_neutral
- fi_annotated
- fi_neutral
---
# Dataset Card for xed_english_finnish
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [Github](https://github.com/Helsinki-NLP/XED)
- **Paper:** [Arxiv](https://arxiv.org/abs/2011.01612)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is the XED dataset. The dataset consists of emotion-annotated movie subtitles from OPUS. We use Plutchik's 8 core emotions to annotate. The data is multilabel. The original annotations were sourced mainly for English and Finnish.
For the English data we used Stanford NER (named entity recognition) (Finkel et al., 2005) to replace names and locations with the tags: [PERSON] and [LOCATION] respectively.
For the Finnish data, we replaced names and locations using the Turku NER corpus (Luoma et al., 2020).
### Supported Tasks and Leaderboards
Sentiment Classification, Multi-class Classification, Multilabel Classification, Intent Classification
### Languages
English, Finnish
## Dataset Structure
### Data Instances
```
{ "sentence": "A confession that you hired [PERSON] ... and are responsible for my father's murder."
"labels": [1, 6] # anger, sadness
}
```
### Data Fields
- sentence: a line from the dataset
- labels: labels corresponding to the emotion as an integer
Where the number indicates the emotion in ascending alphabetical order: anger:1, anticipation:2, disgust:3, fear:4, joy:5, sadness:6, surprise:7, trust:8, with neutral:0 where applicable.
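As an illustration, the integer labels can be decoded back to emotion names with a few lines of Python. This is a minimal sketch: the `LABEL_NAMES` list and `decode_labels` helper below are not part of the dataset, they simply encode the mapping stated above.

```python
# Label ids follow the mapping above: neutral:0, then Plutchik's 8 core
# emotions in ascending alphabetical order.
LABEL_NAMES = [
    "neutral", "anger", "anticipation", "disgust",
    "fear", "joy", "sadness", "surprise", "trust",
]

def decode_labels(label_ids):
    """Map a list of integer label ids to emotion names."""
    return [LABEL_NAMES[i] for i in label_ids]

# For the data instance above:
print(decode_labels([1, 6]))  # ['anger', 'sadness']
```

With the `datasets` library installed, the corpus itself can be loaded via `datasets.load_dataset('xed_en_fi', 'en_annotated')` (or one of the other configs listed above).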
### Data Splits
For English:
Number of unique data points: 17528 ('en_annotated' config) + 9675 ('en_neutral' config)
Number of emotions: 8 (+neutral)
For Finnish:
Number of unique data points: 14449 ('fi_annotated' config) + 10794 ('fi_neutral' config)
Number of emotions: 8 (+neutral)
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License: Creative Commons Attribution 4.0 International License (CC-BY)
### Citation Information
```
@inproceedings{ohman2020xed,
  title={XED: A Multilingual Dataset for Sentiment Analysis and Emotion Detection},
  author={{\"O}hman, Emily and P{\`a}mies, Marc and Kajava, Kaisla and Tiedemann, J{\"o}rg},
  booktitle={The 28th International Conference on Computational Linguistics (COLING 2020)},
  year={2020}
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset. |
GEM/conversational_weather | 2022-10-24T15:30:13.000Z | [
"task_categories:table-to-text",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"data-to-text",
"region:us"
] | GEM | The Conversational Weather dataset is designed for generation of responses to weather queries based on a structured input data. The input allows specifying data attributes such as dates, times, locations, weather conditions, and errors, and also offers control over structure of response through discourse relations such as join, contrast, and justification. | @inproceedings{balakrishnan-etal-2019-constrained,
title = "Constrained Decoding for Neural {NLG} from Compositional Representations in Task-Oriented Dialogue",
author = "Balakrishnan, Anusha and
Rao, Jinfeng and
Upasani, Kartikeya and
White, Michael and
Subba, Rajen",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1080",
doi = "10.18653/v1/P19-1080",
pages = "831--844"
} | null | 1 | 69 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: conversational_weather
tags:
- data-to-text
---
# Dataset Card for GEM/conversational_weather
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/facebookresearch/TreeNLG
- **Paper:** https://aclanthology.org/P19-1080
- **Leaderboard:** N/A
- **Point of Contact:** Kartikeya Upasani
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/conversational_weather).
### Dataset Summary
The purpose of this dataset is to assess how well a model can learn a template-like structure in a very low data setting. The task here is to produce a response to a weather-related query. The reply is further specified through the data attributes and discourse structure in the input. The output contains both the lexicalized text and discourse markers for attributes (e.g., `_ARG_TEMP_ 34`).
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/conversational_weather')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/conversational_weather).
#### paper
[ACL Anthology](https://aclanthology.org/P19-1080)
#### authors
Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, Rajen Subba (Facebook Conversational AI)
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/facebookresearch/TreeNLG)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/P19-1080)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{balakrishnan-etal-2019-constrained,
title = "Constrained Decoding for Neural {NLG} from Compositional Representations in Task-Oriented Dialogue",
author = "Balakrishnan, Anusha and
Rao, Jinfeng and
Upasani, Kartikeya and
White, Michael and
Subba, Rajen",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1080",
doi = "10.18653/v1/P19-1080",
pages = "831--844"
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Kartikeya Upasani
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
kart@fb.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-nc-4.0: Creative Commons Attribution Non Commercial 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
This dataset is intended to help develop conversational agents that exhibit human-like properties such as matching the framing of the response with the query or contrasting relevant data attributes.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Producing a text that is a response to a weather query as per the discourse structure and data attributes specified in the input meaning representation.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Facebook
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, Rajen Subba (Facebook Conversational AI)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Facebook
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Vipul Raheja (Grammarly)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id`: (string): GEM-formatted row id
- `id`: (string): Row id in the original data
- `user_query`: (string): Natural language weather query from humans
- `tree_str_mr`: (string): Synthetically-added user context (datetime and location) in the form of a tree-structured MR
- `response`: (string): A tree-structured annotation of the response.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{'gem_id': 'weather-train-11',
'id': '1108963',
'synthetic_user_context': '[__DG_INFORM__ [__ARG_TASK__ get_forecast ] '
'[__ARG_TEMP__ 37 ] [__ARG_TEMP_UNIT__ fahrenheit ] '
'[__ARG_CLOUD_COVERAGE__ partly cloudy ] '
'[__ARG_DATE_TIME__ [__ARG_COLLOQUIAL__ currently ] '
'] [__ARG_LOCATION__ [__ARG_CITY__ Oakland ] '
'[__ARG_COUNTRY__ United States ] [__ARG_REGION__ '
'California ] ] ] [__DG_INFORM__ [__ARG_TASK__ '
'get_forecast ] [__ARG_TEMP_SUMMARY__ mid 40s ] '
'[__ARG_DATE_TIME_RANGE__ [__ARG_COLLOQUIAL__ This '
'afternoon ] ] [__ARG_LOCATION__ [__ARG_CITY__ '
'Oakland ] [__ARG_COUNTRY__ United States ] '
'[__ARG_REGION__ California ] ] ] [__DG_INFORM__ '
'[__ARG_TASK__ get_forecast ] '
'[__ARG_CLOUD_COVERAGE__ mostly sunny ] '
'[__ARG_DATE_TIME_RANGE__ [__ARG_COLLOQUIAL__ This '
'afternoon ] ] [__ARG_LOCATION__ [__ARG_CITY__ '
'Oakland ] [__ARG_COUNTRY__ United States ] '
'[__ARG_REGION__ California ] ] ]',
'tree_str_mr': "[__DG_INFORM__ It's [__ARG_DATE_TIME__ [__ARG_COLLOQUIAL__ "
'currently ] ] [__ARG_CLOUD_COVERAGE__ partly cloudy ] and '
'[__ARG_TEMP__ __ARG_TEMP__ ] [__ARG_TEMP_UNIT__ '
'__ARG_TEMP_UNIT__ ] [__ARG_LOCATION__ in [__ARG_CITY__ '
'__ARG_CITY__ ] , [__ARG_REGION__ __ARG_REGION__ ] , '
'[__ARG_COUNTRY__ __ARG_COUNTRY__ ] ] . ] [__DG_INFORM__ '
'[__ARG_DATE_TIME_RANGE__ [__ARG_COLLOQUIAL__ This afternoon ] '
"] , it'll be [__ARG_CLOUD_COVERAGE__ mostly sunny ] ] "
'[__DG_INFORM__ with temperatures in the [__ARG_TEMP_SUMMARY__ '
'mid <number> ] ]',
'user_query': 'Show weather forecast for Oakland, CA. '}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- Standard Splits: Train/Validation/Test
- Additional Split: Disc_Test (a more challenging subset of the test set that contains discourse relations)
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The test set contains 3,121 examples, of which 1.1K (35%) have unique MRs that have never been seen in the training set.
####
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
```
{'gem_id': 'weather-train-13333', 'data_id': '1260610', 'user_query': 'Sundown', 'tree_str_mr': '[__DG_INFORM__ [__ARG_TASK__ get_weather_attribute ] [__ARG_SUNSET_TIME_DATE_TIME__ [__ARG_TIME__ 05:04 PM ] ] ]', 'response': '[__DG_INFORM__ The sun will go down at [__ARG_SUNSET_TIME_DATE_TIME__ [__ARG_TIME__ __ARG_TIME__ ] ] ]'}
```
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
The dataset was curated to develop a weather bot that exhibits human-like properties such as matching the framing of the response with the query or contrasting relevant data attributes.
The dataset provides rich tree-based meaning representations that offer fine-grained control over the response, e.g. by specifying which two attributes are to be contrasted. The natural language input queries are also provided to model the coherence of the response based on the input. The output response is annotated with the input meaning components using special bracketing tokens, which enables developing new techniques such as constrained decoding to improve the quality of output responses.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Adequately expressing CONTRAST and JUSTIFY discourse relations with appropriate grouping of arguments; adequately generalizing to many combinations of arguments.
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to he original dataset? -->
<!-- scope: periscope -->
`data points removed`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
The original repo contained a challenge set disc_test.tsv, which is a subset of the test set consisting of discourse relations (CONTRAST and JUSTIFY), but it also contained JOIN relations.
This discrepancy has been rectified in the GEM version, and the rectified version has been added to the `challenge_sets`.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Adequately expressing CONTRAST and JUSTIFY discourse relations with appropriate grouping of arguments; adequately generalizing to many combinations of arguments.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`, `Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
Tree accuracy: It measures whether the tree structure in the prediction matches that of the input MR exactly (modulo repeated arguments that need only appear once).
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
Automatic metrics are evaluated on the raw model predictions (which have de-lexicalized fields):
* Tree accuracy: Measures whether the tree structure in the prediction matches that of the input MR exactly.
* BLEU-4: A word overlap metric commonly used for evaluating NLG systems.
The authors also performed human evaluation studies by asking annotators to evaluate the quality of responses produced by different models. Annotators provided binary ratings on the following dimensions:
* Grammaticality: Measures fluency of the responses.
* Correctness: Measures semantic correctness of the responses.
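A minimal sketch of the tree accuracy check described above, under stated assumptions: it compares the bracket skeletons of the input MR and the (delexicalized) prediction exactly, and ignores the paper's allowance for repeated arguments appearing only once. The helper names are illustrative, not from the original codebase.

```python
def tree_skeleton(tree_str):
    # Keep only the structural tokens: opening brackets such as
    # [__DG_INFORM__ or [__ARG_TIME__, plus the closing ] tokens.
    # Lexicalized text and argument values between them are discarded.
    return [tok for tok in tree_str.split()
            if tok.startswith("[__") or tok == "]"]

def tree_accuracy(input_mr, prediction):
    # Exact match of the bracket structure between input MR and prediction
    # (a simplification of the metric described above).
    return tree_skeleton(input_mr) == tree_skeleton(prediction)

mr = "[__DG_INFORM__ [__ARG_SUNSET_TIME_DATE_TIME__ [__ARG_TIME__ 05:04 PM ] ] ]"
pred = ("[__DG_INFORM__ The sun will go down at "
        "[__ARG_SUNSET_TIME_DATE_TIME__ [__ARG_TIME__ __ARG_TIME__ ] ] ]")
print(tree_accuracy(mr, pred))  # True
```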
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was curated to develop a weather bot that exhibits human-like properties such as matching the framing of the response with the query or contrasting relevant data attributes. To achieve this, the dataset contains rich tree-structured meaning representations that are specified using several data arguments and discourse acts, the input natural language queries, and annotations for the responses.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Producing a text that is a response to a weather query as per the discourse structure and data attributes specified in the input meaning representation.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Crowdsourced`, `Machine-generated`
#### Where was it crowdsourced?
<!-- info: If crowdsourced, where from? -->
<!-- scope: periscope -->
`Other crowdworker platform`
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The dataset is focused on the weather domain: Weather was the first successful case of NLG put into production back in the 80s (Reiter & Dale, 1997). This domain offers significant complexity for NLG. Weather forecast summaries in particular can be very long, and require reasoning over several disjoint pieces of information.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by crowdworker
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
Please refer to Appendix D of the original paper for details.
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
hybrid
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
Please refer to Appendix C of the original paper for details.
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
Annotation was done as work for hire and contains no PII.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Data is simulated and not specific to annotator.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
Grammatical evaluations performed with the data to date have used norms from informal Standard American English. These prescriptive notions of grammaticality potentially serve to perpetuate systemic power imbalances as they’re conveyed by language.
Since the data only contains informal Standard American English, its use to train a model may not be appropriate depending on the potential use case.
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy to the data subjects and creators risks when using the dataset. -->
<!-- scope: microscope -->
Annotation was done as work for hire and contains no PII. Annotated data is simulated and not specific to annotator.
### Licenses
### Known Technical Limitations
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
An imperfect model used to convey actual weather data could mislead users about weather conditions.
|
gabtan99/pex-conversations | 2022-10-20T19:34:29.000Z | [
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:tl",
"language:fil",
"license:unknown",
"multi-turn",
"region:us"
] | gabtan99 | null | null | null | 1 | 69 | ---
language:
- tl
- fil
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- dialogue-modeling
- language-modeling
pretty_name: PEx Conversations
tags:
- multi-turn
---
# PinoyExchange (PEx) Conversations Dataset
# Summary
PEx Conversations is a dataset composed of collected threads from PinoyExchange.com (consisting of Tagalog, English, or Taglish responses).
The corpus consists of 45K total scraped threads from 8 subforums. The data only contains the user messages, which means any images, videos, links, or other embedded HTML are not collected in the scraping process. All characters have been transliterated to their closest ASCII representation, and Unicode errors were fixed.
# Format
The data is grouped by category. Each object in the list is composed of:
* category - the category of the threads
* conversations - the list of threads
The threads inside conversations have a recursive structure consisting of the following:
* text - the response/reply/prompt
* replies - a list of replies to this prompt; each reply in the list has the same structure, with its own text and replies components.
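The recursive structure described above can be flattened into a list of utterances with a short depth-first traversal. This is a sketch, assuming each node is a dict with `text` and `replies` keys as described; the example thread below is invented for illustration.

```python
def flatten_thread(node, utterances=None):
    # Depth-first traversal: collect every 'text' field from the recursive
    # 'replies' structure into a flat list of utterances.
    if utterances is None:
        utterances = []
    utterances.append(node["text"])
    for reply in node.get("replies", []):
        flatten_thread(reply, utterances)
    return utterances

thread = {
    "text": "Anong masarap na ulam?",
    "replies": [
        {"text": "Adobo!", "replies": [
            {"text": "Agree, adobo talaga.", "replies": []},
        ]},
        {"text": "Sinigang for me.", "replies": []},
    ],
}
print(flatten_thread(thread))
```

The same traversal can be adapted to emit (prompt, reply) pairs for dialogue-modeling setups such as the Tagalog DialoGPT work linked below.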
# Subforum statistics
The amount of data per subforum is as follows:
* Small Talk - 5K conversations with 1.16M utterances
* Food & Drinks - 8.2K conversations with 273K utterances
* Health & Wellness - 6.3K conversations with 93K utterances
* Body & Fitness - 3.9K conversations with 94K utterances
* Home & Garden - 3.6K conversations with 71K utterances
* Style & Fashion - 9.7K conversations with 197K utterances
* Travel & Leisure - 7.3K conversations with 431K utterances
* Visas & Immigration - 1.1K conversations with 99K utterances
# Model Research
[Tagalog DialoGPT](https://huggingface.co/gabtan99/dialogpt-tagalog-medium) |
patriziobellan/PET | 2023-07-05T14:03:24.000Z | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"Business Process Management",
"NLP",
"ML",
"DL",
"arxiv:2203.04860",
"region:us"
] | patriziobellan | Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, gateways, actors and flow information. We present our new resource, including a detailed overview of the annotation schema and guidelines, as well as a variety of baselines to benchmark the difficulty and challenges of business process extraction from text. | @inproceedings{DBLP:conf/bpm/BellanADGP22,
author = {Patrizio Bellan and
Han van der Aa and
Mauro Dragoni and
Chiara Ghidini and
Simone Paolo Ponzetto},
editor = {Cristina Cabanillas and
Niels Frederik Garmann{-}Johnsen and
Agnes Koschmider},
title = {{PET:} An Annotated Dataset for Process Extraction from Natural Language
Text Tasks},
booktitle = {Business Process Management Workshops - {BPM} 2022 International Workshops,
M{\"{u}}nster, Germany, September 11-16, 2022, Revised Selected
Papers},
series = {Lecture Notes in Business Information Processing},
volume = {460},
pages = {315--321},
publisher = {Springer},
year = {2022},
url = {https://doi.org/10.1007/978-3-031-25383-6\_23},
doi = {10.1007/978-3-031-25383-6\_23},
timestamp = {Tue, 14 Feb 2023 09:47:10 +0100},
biburl = {https://dblp.org/rec/conf/bpm/BellanADGP22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{DBLP:conf/aiia/BellanGDPA22,
author = {Patrizio Bellan and
Chiara Ghidini and
Mauro Dragoni and
Simone Paolo Ponzetto and
Han van der Aa},
editor = {Debora Nozza and
Lucia C. Passaro and
Marco Polignano},
title = {Process Extraction from Natural Language Text: the {PET} Dataset and
Annotation Guidelines},
booktitle = {Proceedings of the Sixth Workshop on Natural Language for Artificial
Intelligence {(NL4AI} 2022) co-located with 21th International Conference
of the Italian Association for Artificial Intelligence (AI*IA 2022),
Udine, November 30th, 2022},
series = {{CEUR} Workshop Proceedings},
volume = {3287},
pages = {177--191},
publisher = {CEUR-WS.org},
year = {2022},
url = {https://ceur-ws.org/Vol-3287/paper18.pdf},
timestamp = {Fri, 10 Mar 2023 16:23:01 +0100},
biburl = {https://dblp.org/rec/conf/aiia/BellanGDPA22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 5 | 69 | ---
license: mit
task_categories:
- token-classification
language:
- en
tags:
- Business Process Management
- NLP
- ML
- DL
pretty_name: PET
size_categories:
- n<1K
---
# PET: A NEW DATASET FOR PROCESS EXTRACTION FROM TEXT
# Dataset Card for PET
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Annotation Guidelines](#annotationguidelines)
- [Update](#updates)
- [Loading data](#loadingdata)
## Dataset Description
- **Homepage:** https://pdi.fbk.eu/pet-dataset/
- **Paper:** https://arxiv.org/abs/2203.04860
- **Point of Contact:** [Patrizio Bellan](pbellan@fbk.eu)
### Dataset Summary
Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, actors, activity data, gateways and their conditions. We present our new resource to benchmark the difficulty and challenges of business process extraction from text.
### Supported Tasks and Leaderboards
- Token Classification
- Named Entity Recognition
- Relations Extraction
### Languages
English
## Dataset Structure
Test set to benchmark *Business Process Extraction from Text* approaches.
### Data Instances
#### Token Classification
For each instance, there is a document name (the name of the document in the Friedrich *et al.* dataset), an integer representing the number of the sentence, a list of tokens representing the words of the sentence instance, and a list of *ner tags* (in IOB2 format) representing the annotation of the process elements of the sentence.
Below is an example of a data instance.
```
{
"document name":"doc-1.1",
"sentence-ID":1,
"tokens":["Whenever","the","sales","department","receives","an","order",",","a","new","process","instance","is","created","."],
"ner-tags":["O","B-Actor","I-Actor","I-Actor","B-Activity","B-Activity Data","I-Activity Data","O","O","O","O","O","O","O","O"]
}
```
#### Relations Extraction
For each instance, there is a document name (the name of the document in the Friedrich *et al.* dataset), a list of tokens representing the words of the document instance, a list of integers representing each word's position within its sentence, a list of *ner tags* (in IOB2 format) representing the annotation of each token, a list of sentence IDs giving the sentence number of each token, and a list of the relations of the document.
Below is an example of a data instance.
```
{
"document name": "doc-1.1",
"tokens": ["A", "small", "company",...],
"tokens-IDs": [0, 1, 2, ...],
"ner_tags": ["O", "O", "O", ...],
"sentence-IDs": [0, 0, 0, ...],
"relations": {
"source-head-sentence-ID": [1, 1, 1, ...],
"source-head-word-ID": [4, 4, 4, ...],
"relation-type": ["uses", "flow", "actor recipient", ...],
"target-head-sentence-ID": [1, 2, 1,...],
"target-head-word-ID": [5, 9, 1, ...]
}
}
```
### Data Fields
#### Token Classification
- *document name*: a string used to represent the name of the document.
- *sentence-ID*: an integer (starting from 0) representing the number of the sentence within the document.
- *tokens*: a list of strings representing the words of the sentence.
- *ner-tags*: a list of strings representing the annotation of each word.
The allowed **ner-tags** are:
- **O**: An O tag indicates that a token belongs to no chunk.
- **B-Actor**: This tag indicates the beginning of an *Actor* chunk.
- **I-Actor**: This tag indicates that the tag is inside an *Actor* chunk.
- **B-Activity**: This tag indicates the beginning of an *Activity* chunk.
- **I-Activity**: This tag indicates that the tag is inside an *Activity* chunk.
- **B-Activity Data**: This tag indicates the beginning of an *Activity Data* chunk.
- **I-Activity Data**: This tag indicates that the tag is inside an *Activity Data* chunk.
- **B-Further Specification**: This tag indicates the beginning of a *Further Specification* chunk.
- **I-Further Specification**: This tag indicates that the tag is inside a *Further Specification* chunk.
- **B-XOR Gateway**: This tag indicates the beginning of a *XOR Gateway* chunk.
- **I-XOR Gateway**: This tag indicates that the tag is inside a *XOR Gateway* chunk.
- **B-Condition Specification**: This tag indicates the beginning of a *Condition Specification* chunk.
- **I-Condition Specification**: This tag indicates that the tag is inside a *Condition Specification* chunk.
- **B-AND Gateway**: This tag indicates the beginning of an *AND Gateway* chunk.
- **I-AND Gateway**: This tag indicates that the tag is inside an *AND Gateway* chunk.
To have a complete explanation of each process element tag please refer to the [research paper](https://arxiv.org/abs/2203.04860) and the [annotation guidelines](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf).
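For downstream use, these IOB2 tags typically need to be decoded back into labeled chunks. Below is a minimal, hedged sketch of such a decoder, applied to a shortened version of the example instance above; it is not part of the official PET tooling.

```python
def iob2_to_chunks(tokens, tags):
    """Decode parallel token/IOB2-tag lists into (label, text) chunks."""
    chunks, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                chunks.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:  # "O" tag, or an I- tag that does not continue the open chunk
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return [(label, " ".join(words)) for label, words in chunks]

# Shortened version of the doc-1.1 example instance shown above.
tokens = ["Whenever", "the", "sales", "department", "receives", "an", "order"]
tags = ["O", "B-Actor", "I-Actor", "I-Actor", "B-Activity",
        "B-Activity Data", "I-Activity Data"]
print(iob2_to_chunks(tokens, tags))
# → [('Actor', 'the sales department'), ('Activity', 'receives'), ('Activity Data', 'an order')]
```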
#### Relations Extraction
- *document name*: a string used to represent the name of the document.
- *tokens*: a list of strings representing the words of the document.
- *tokens-IDs*: a list of integers representing each word's position within its sentence.
- *ner_tags*: a list of strings representing the annotation of each word (see *ner-tags* above).
- *sentence-IDs*: a list of integers representing the sentence number of each word of the document.
- *relations*: a list of document relations.
- *source-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the source entity.
- *source-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the source entity.
- *relation-type*: a list of relation tags.
- *target-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the target entity.
- *target-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the target entity.
A relation is defined by the values of *source-head-sentence-ID*, *source-head-word-ID*, *relation-type*, *target-head-sentence-ID*, and *target-head-word-ID* at the same index position.
In the following example, the first relation of the first document is shown:
```python
document_1=modelhub_dataset['test'][0]
relation = {
'source-head-sentence-ID': document_1['relations']['source-head-sentence-ID'][0],
'source-head-word-ID': document_1['relations']['source-head-word-ID'][0],
'relation-type': document_1['relations']['relation-type'][0],
'target-head-sentence-ID': document_1['relations']['target-head-sentence-ID'][0],
    'target-head-word-ID': document_1['relations']['target-head-word-ID'][0],
}
print(relation)
```
the output is:
```python
{'relation-type': 'uses',
'source-head-sentence-ID': 1,
'source-head-word-ID': 4,
'target-head-sentence-ID': 1,
'target-head-word-ID': 1}
```
This means: the entity in sentence number *1*, starting at token position *4*, has a *uses* relation with the entity in sentence number *1*, starting at token position *1*.
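To recover the actual tokens a relation connects, the (sentence-ID, word-ID) pairs have to be matched against the document-level `tokens`, `tokens-IDs`, and `sentence-IDs` lists described above. Below is a minimal sketch of that lookup; the toy document is invented for illustration and is not taken from the dataset.

```python
def token_at(doc, sentence_id, word_id):
    """Return the token at (sentence_id, word_id) in a relations-extraction document."""
    for tok, tok_id, sent_id in zip(doc["tokens"], doc["tokens-IDs"], doc["sentence-IDs"]):
        if sent_id == sentence_id and tok_id == word_id:
            return tok
    raise KeyError((sentence_id, word_id))

# Toy document mimicking the document-level field layout described above.
toy_doc = {
    "tokens":       ["A", "clerk", "checks", "the", "order", "Then", "it", "ships"],
    "tokens-IDs":   [0,   1,       2,        3,     4,       0,      1,    2],
    "sentence-IDs": [0,   0,       0,        0,     0,       1,      1,    1],
}

print(token_at(toy_doc, 0, 2))  # → checks
```

The same lookup applied to both the source and target pairs of a relation yields the head tokens of the related entities.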
### Data Splits
The data was not split. It contains the test set only.
## Dataset Creation
### Curation Rationale
Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization pursued in Business Process Management.
### Source Data
#### Initial Data Collection and Normalization
The dataset construction process has been split in five main phases:
1. Text pre-processing. As a first step, we checked the content of each document and tokenized it. This initial check was necessary since some of the original texts were automatically translated into English by the authors of the dataset. The translations had never been validated; indeed, several errors were found and fixed.
2. Text Annotation. Each text was annotated following the [guidelines](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf). The team was composed of five annotators with high expertise in BPMN. Each document was assigned to three experts who were in charge of identifying all the elements and flows within each document. In this phase, we used the Inception tool to support the annotators.
3. Automatic annotation fixing. After the second phase, we ran an automatic rule-based procedure to fix annotations that were not compliant with the guidelines. For example, if a modal verb was erroneously included in the annotation of an Activity, the procedure removed it from the annotation. Another example is a missing article within an annotation related to an Actor; in this case, the script included it in the annotation. This phase allowed us to remove possible annotation errors and to obtain annotations compliant with the guidelines.
4. Agreement Computation. Here we computed, on the annotations provided by the experts, the agreement scores for each process element and for each relation between process element pairs, adopting the methodology proposed in [Hripcsak *et al.*](https://academic.oup.com/jamia/article/12/3/296/812057?login=true). We measured agreement in terms of the F1 measure because, besides being straightforward to calculate, it is directly interpretable. Note that chance-corrected measures like *k* approach the F1 measure as the number of negative cases the raters agree on grows. Following this methodology, an annotation was considered in agreement among the experts if and only if they captured the same span of words and assigned the same process element tag to the annotation.
5. Reconciliation. The last phase consisted of mitigating the disagreements among the annotations provided by the experts. The aim of this phase was to obtain a shared and agreed set of gold-standard annotations on each text for both entities and relations. Such entities also enable the generation of the related fully-connected process model flow, which can be rendered using, but not limited to, a BPMN diagram. During this last phase, 2 of the 47 documents originally included in the dataset were discarded: the annotators could not fully annotate them because they were unable to completely understand which process elements were actually included in some specific parts of the text. For this reason, the final size of the dataset is 45 textual descriptions of the corresponding process models together with their annotations.
#### Who are the source language producers?
English
### Annotations
#### Annotation process
You can read about the annotation process in the original paper https://arxiv.org/abs/2203.04860
#### Who are the annotators?
Expert Annotators
### Personal and Sensitive Information
No personal or sensitive information issues.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset has no social impact
### Discussion of Biases
No bias found in the dataset
### Other Known Limitations
The *Further specification* and *AND Gateway* elements obtained very poor performance on the baselines proposed in the paper.
The *AND Gateway* is the less represented process elements in this dataset.
The *Further Specification* process element was the most difficult element to annotate.
## Additional Information
### Dataset Curators
- Patrizio Bellan (Fondazione Bruno Kessler, Trento, Italy and Free University of Bozen-Bolzano, Bolzano, Italy)
- Mauro Dragoni (Fondazione Bruno Kessler, Trento, Italy)
- Chiara Ghidini (Fondazione Bruno Kessler, Trento, Italy)
- Han van der Aa (University of Mannheim, Mannheim, Germany)
- Simone Ponzetto (University of Mannheim, Mannheim, Germany)
### Licensing Information
### Citation Information
```
@inproceedings{DBLP:conf/aiia/BellanGDPA22,
author = {Patrizio Bellan and
Chiara Ghidini and
Mauro Dragoni and
Simone Paolo Ponzetto and
Han van der Aa},
editor = {Debora Nozza and
Lucia C. Passaro and
Marco Polignano},
title = {Process Extraction from Natural Language Text: the {PET} Dataset and
Annotation Guidelines},
booktitle = {Proceedings of the Sixth Workshop on Natural Language for Artificial
Intelligence {(NL4AI} 2022) co-located with 21th International Conference
of the Italian Association for Artificial Intelligence (AI*IA 2022),
Udine, November 30th, 2022},
series = {{CEUR} Workshop Proceedings},
volume = {3287},
pages = {177--191},
publisher = {CEUR-WS.org},
year = {2022},
url = {https://ceur-ws.org/Vol-3287/paper18.pdf},
timestamp = {Fri, 10 Mar 2023 16:23:01 +0100},
biburl = {https://dblp.org/rec/conf/aiia/BellanGDPA22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{DBLP:conf/bpm/BellanADGP22,
author = {Patrizio Bellan and
Han van der Aa and
Mauro Dragoni and
Chiara Ghidini and
Simone Paolo Ponzetto},
editor = {Cristina Cabanillas and
Niels Frederik Garmann{-}Johnsen and
Agnes Koschmider},
title = {{PET:} An Annotated Dataset for Process Extraction from Natural Language
Text Tasks},
booktitle = {Business Process Management Workshops - {BPM} 2022 International Workshops,
M{\"{u}}nster, Germany, September 11-16, 2022, Revised Selected
Papers},
series = {Lecture Notes in Business Information Processing},
volume = {460},
pages = {315--321},
publisher = {Springer},
year = {2022},
url = {https://doi.org/10.1007/978-3-031-25383-6\_23},
doi = {10.1007/978-3-031-25383-6\_23},
timestamp = {Tue, 14 Feb 2023 09:47:10 +0100},
biburl = {https://dblp.org/rec/conf/bpm/BellanADGP22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [Patrizio Bellan](https://pdi.fbk.eu/bellan/) for adding this dataset.
#### <a name="updates"></a>Update
- v1.0.0: Added token classification task
- v1.0.1: Added relation-extraction task
- v1.1.0: Fixed minor errors, fixed performs relations
Version 1.1.0 can be found [here](https://huggingface.co/datasets/patriziobellan/PETv11)
## <a name="annotationguidelines"></a>Annotation Guidelines
### Inception Schema
The inception schema can be found [here](https://pdi.fbk.eu/pet/inception-schema.json)
### Annotation Guidelines
The Annotation guidelines and procedures adopted to annotate the PET dataset can be downloaded [here](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf)
### Article
The article can be downloaded [here](https://ceur-ws.org/Vol-3287/paper18.pdf)
### Python Interface
A Python interface (beta version) to interact with the dataset can be found [here](https://pypi.org/project/petdatasetreader/)
You can find the **BASELINES**, the annotation data, and a graphical interface to visualize predictions [here](https://github.com/patriziobellan86/PETbaselines)
### Benchmarks
A Python benchmarking package to test approaches on the PET dataset can be found [here](https://pypi.org/project/petbenchmarks/)
## <a name="loadingdata"></a>Loading data
### Token-classification task
```python
from datasets import load_dataset
modelhub_dataset = load_dataset("patriziobellan/PET", name='token-classification')
```
### Relations-extraction task
```python
from datasets import load_dataset
modelhub_dataset = load_dataset("patriziobellan/PET", name='relations-extraction')
```
|
ChristophSchuhmann/improved_aesthetics_6.25plus | 2022-08-10T11:33:42.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | null | 7 | 69 | Entry not found |
Muennighoff/P3 | 2022-11-03T15:15:39.000Z | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"license:apache-2.0",
"region:us"
] | Muennighoff | null | null | null | 10 | 69 | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: P3
size_categories:
- 100M<n<1B
task_categories:
- other
---
This is a repreprocessed version of [P3](https://huggingface.co/datasets/bigscience/P3) with any updates that have been made to the P3 datasets since the release of the original P3. It is used for the finetuning of [bloomz-p3](https://huggingface.co/bigscience/bloomz-p3) & [mt0-xxl-p3](https://huggingface.co/bigscience/mt0-xxl-p3). The script is available [here](https://github.com/bigscience-workshop/bigscience/blob/638e66e40395dbfab9fa08a662d43b317fb2eb38/data/p3/prepare_p3.py).
|
malteos/germeval2017 | 2022-11-30T13:49:08.000Z | [
"language:de",
"region:us"
] | malteos | null | null | null | 0 | 69 | ---
language:
- de
---
# Germeval Task 2017: Shared Task on Aspect-based Sentiment in Social Media Customer Feedback
In the connected, modern world, customer feedback is a valuable source for insights on the quality of products or services. This feedback allows other customers to benefit from the experiences of others and enables businesses to react on requests, complaints or recommendations. However, the more people use a product or service, the more feedback is generated, which results in the major challenge of analyzing huge amounts of feedback in an efficient, but still meaningful way.
Thus, we propose a shared task on automatically analyzing customer reviews about “Deutsche Bahn” - the German public train operator with about two billion passengers each year.
Example:
> “RT @XXX: Da hört jemand in der Bahn so laut ‘700 Main Street’ durch seine Kopfhörer, dass ich mithören kann. :( :( :(“
As shown in the example, insights from reviews can be derived on different granularities. The review contains a general evaluation of the travel (The customer disliked the travel). Furthermore, the review evaluates a dedicated aspect of the train travel (“laut” → customer did not like the noise level).
Consequently, we frame the task as aspect-based sentiment analysis with four subtasks:
## Data format
```
ID <tab> Text <tab> Relevance <tab> Sentiment <tab> Aspect:Polarity (whitespace separated)
```
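Below is a minimal sketch of parsing one line in this format; the sample line and its aspect label are invented for illustration and are not taken from the dataset.

```python
def parse_line(line):
    """Parse one tab-separated GermEval 2017 record as described above."""
    doc_id, text, relevance, sentiment, aspects = line.rstrip("\n").split("\t")
    # Aspect:Polarity pairs are whitespace-separated; split each on the last ':'.
    aspect_polarities = [tuple(a.rsplit(":", 1)) for a in aspects.split()] if aspects else []
    return {"id": doc_id, "text": text, "relevance": relevance,
            "sentiment": sentiment, "aspects": aspect_polarities}

sample = "42\tDie Bahn war mal wieder zu laut.\ttrue\tnegative\tAtmosphäre:negative"
record = parse_line(sample)
print(record["aspects"])  # → [('Atmosphäre', 'negative')]
```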
## Links
- http://ltdata1.informatik.uni-hamburg.de/germeval2017/
- https://sites.google.com/view/germeval2017-absa/
## How to cite
```bibtex
@inproceedings{germevaltask2017,
title = {{GermEval 2017: Shared Task on Aspect-based Sentiment in Social Media Customer Feedback}},
author = {Michael Wojatzki and Eugen Ruppert and Sarah Holschneider and Torsten Zesch and Chris Biemann},
year = {2017},
booktitle = {Proceedings of the GermEval 2017 – Shared Task on Aspect-based Sentiment in Social Media Customer Feedback},
address={Berlin, Germany},
pages={1--12}
}
``` |
keremberke/satellite-building-segmentation | 2023-01-18T09:41:34.000Z | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"Aerial",
"Logistics",
"Construction",
"Damage Risk",
"Other",
"region:us"
] | keremberke | null | @misc{ buildings-instance-segmentation_dataset,
title = { Buildings Instance Segmentation Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } },
url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-18 },
} | null | 5 | 69 | ---
task_categories:
- image-segmentation
tags:
- roboflow
- roboflow2huggingface
- Aerial
- Logistics
- Construction
- Damage Risk
- Other
---
<div align="center">
<img width="640" alt="keremberke/satellite-building-segmentation" src="https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['building']
```
### Number of Images
```json
{'train': 6764, 'valid': 1934, 'test': 967}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/satellite-building-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1](https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ buildings-instance-segmentation_dataset,
title = { Buildings Instance Segmentation Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } },
url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:09 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 9665 images.
Buildings are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
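COCO polygon segmentations are stored as flat `[x1, y1, x2, y2, ...]` coordinate lists, so a building's polygon area can be computed directly with the shoelace formula. Below is a minimal sketch on a toy annotation; the annotation dict is invented for illustration and is not taken from this dataset.

```python
def polygon_area(flat_xy):
    """Shoelace formula over a flat [x1, y1, x2, y2, ...] polygon."""
    xs, ys = flat_xy[0::2], flat_xy[1::2]
    n = len(xs)
    s = sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i] for i in range(n))
    return abs(s) / 2.0

# Toy COCO-style annotation: a 4x3 axis-aligned rectangle.
toy_annotation = {"category_id": 0, "segmentation": [[0, 0, 4, 0, 4, 3, 0, 3]]}
print(polygon_area(toy_annotation["segmentation"][0]))  # → 12.0
```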
|
animelover/genshin-impact-images | 2023-07-13T05:49:11.000Z | [
"region:us"
] | animelover | null | null | null | 15 | 69 | Entry not found |
AbderrahmanSkiredj1/arabml_darija_english_parallel_dataset | 2023-03-19T15:06:36.000Z | [
"region:us"
] | AbderrahmanSkiredj1 | null | null | null | 1 | 69 | Entry not found |
mstz/speeddating | 2023-04-07T14:54:21.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"speeddating",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 0 | 69 | ---
language:
- en
tags:
- speeddating
- tabular_classification
- binary_classification
pretty_name: Speed dating
size_categories:
- 1K<n<10K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- dating
---
# Speed dating
The [Speed dating dataset](https://www.openml.org/search?type=data&sort=nr_of_likes&status=active&id=40536) from OpenML.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| dating | Binary classification | Will the two date? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/speeddating")["train"]
```
# Features
|**Features** |**Type** |
|---------------------------------------------------|---------|
|`is_dater_male` |`int8` |
|`dater_age` |`int8` |
|`dated_age` |`int8` |
|`age_difference` |`int8` |
|`dater_race` |`string` |
|`dated_race` |`string` |
|`are_same_race` |`int8` |
|`same_race_importance_for_dater` |`float64`|
|`same_religion_importance_for_dater` |`float64`|
|`attractiveness_importance_for_dated` |`float64`|
|`sincerity_importance_for_dated` |`float64`|
|`intelligence_importance_for_dated` |`float64`|
|`humor_importance_for_dated` |`float64`|
|`ambition_importance_for_dated` |`float64`|
|`shared_interests_importance_for_dated` |`float64`|
|`attractiveness_score_of_dater_from_dated` |`float64`|
|`sincerity_score_of_dater_from_dated` |`float64`|
|`intelligence_score_of_dater_from_dated` |`float64`|
|`humor_score_of_dater_from_dated` |`float64`|
|`ambition_score_of_dater_from_dated` |`float64`|
|`shared_interests_score_of_dater_from_dated` |`float64`|
|`attractiveness_importance_for_dater` |`float64`|
|`sincerity_importance_for_dater` |`float64`|
|`intelligence_importance_for_dater` |`float64`|
|`humor_importance_for_dater` |`float64`|
|`ambition_importance_for_dater` |`float64`|
|`shared_interests_importance_for_dater` |`float64`|
|`self_reported_attractiveness_of_dater` |`float64`|
|`self_reported_sincerity_of_dater` |`float64`|
|`self_reported_intelligence_of_dater` |`float64`|
|`self_reported_humor_of_dater` |`float64`|
|`self_reported_ambition_of_dater` |`float64`|
|`reported_attractiveness_of_dated_from_dater` |`float64`|
|`reported_sincerity_of_dated_from_dater` |`float64`|
|`reported_intelligence_of_dated_from_dater` |`float64`|
|`reported_humor_of_dated_from_dater` |`float64`|
|`reported_ambition_of_dated_from_dater` |`float64`|
|`reported_shared_interests_of_dated_from_dater` |`float64`|
|`dater_interest_in_sports` |`float64`|
|`dater_interest_in_tvsports` |`float64`|
|`dater_interest_in_exercise` |`float64`|
|`dater_interest_in_dining` |`float64`|
|`dater_interest_in_museums` |`float64`|
|`dater_interest_in_art` |`float64`|
|`dater_interest_in_hiking` |`float64`|
|`dater_interest_in_gaming` |`float64`|
|`dater_interest_in_clubbing` |`float64`|
|`dater_interest_in_reading` |`float64`|
|`dater_interest_in_tv` |`float64`|
|`dater_interest_in_theater` |`float64`|
|`dater_interest_in_movies` |`float64`|
|`dater_interest_in_concerts` |`float64`|
|`dater_interest_in_music` |`float64`|
|`dater_interest_in_shopping` |`float64`|
|`dater_interest_in_yoga` |`float64`|
|`interests_correlation` |`float64`|
|`expected_satisfaction_of_dater` |`float64`|
|`expected_number_of_likes_of_dater_from_20_people` |`int8` |
|`expected_number_of_dates_for_dater` |`int8` |
|`dater_liked_dated` |`float64`|
|`probability_dated_wants_to_date` |`float64`|
|`already_met_before` |`int8` |
|`dater_wants_to_date` |`int8` |
|`dated_wants_to_date` |`int8` |
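Since the `dating` task asks whether the two will date, a mutual-match label can be derived from the two decision columns above. Below is a minimal sketch on toy rows (the real dataset would be loaded with `load_dataset`, which needs network access); the rows are invented for illustration.

```python
# Toy rows carrying only the two decision features from the table above.
rows = [
    {"dater_wants_to_date": 1, "dated_wants_to_date": 1},
    {"dater_wants_to_date": 1, "dated_wants_to_date": 0},
    {"dater_wants_to_date": 0, "dated_wants_to_date": 0},
]

# A pair is a match only when both sides want to date.
matches = [r["dater_wants_to_date"] and r["dated_wants_to_date"] for r in rows]
match_rate = sum(matches) / len(rows)
print(match_rate)  # → 0.3333333333333333
```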
|
Ali-fb/martin_valen_dataset | 2023-03-29T05:15:50.000Z | [
"region:us"
] | Ali-fb | null | null | null | 0 | 69 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 82775.0
num_examples: 10
download_size: 82229
dataset_size: 82775.0
---
# Dataset Card for "martin_valen_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/toxicity | 2023-04-16T18:03:37.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"toxicity",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | null | null | 0 | 69 | ---
language:
- en
tags:
- toxicity
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Toxicity
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- toxicity
license: cc
---
# Toxicity
The [Toxicity dataset](https://archive-beta.ics.uci.edu/dataset/728/toxicity) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
The dataset includes 171 molecules designed for functional domains of a core clock protein, CRY1, which is responsible for generating the circadian rhythm.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| toxicity | Binary classification | Is the molecule toxic? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/toxicity")["train"]
``` |
mstz/ozone | 2023-04-16T17:57:24.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"ozone",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | @misc{misc_ozone_level_detection_172,
author = {Zhang,Kun, Fan,Wei & Yuan,XiaoJing},
title = {{Ozone Level Detection}},
year = {2008},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5NG6W}}
} | null | 0 | 69 | ---
language:
- en
tags:
- ozone
- tabular_classification
- binary_classification
pretty_name: Ozone
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- 8hr
- 1hr
license: cc
---
# Ozone
The [Ozone dataset](https://archive.ics.uci.edu/ml/datasets/Ozone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task**                  | **Description**                         |
|-------------------|---------------------------|-----------------------------------------|
| 8hr               | Binary classification     | Is this an ozone day (eight-hour peak)? |
| 1hr               | Binary classification     | Is this an ozone day (one-hour peak)?   |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/ozone", "8hr")["train"]
``` |
mstz/pol | 2023-04-16T17:58:01.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"pol",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | null | null | 0 | 69 | ---
language:
- en
tags:
- pol
- tabular_classification
- binary_classification
- UCI
pretty_name: Pol
size_categories:
- 10K<n<100K
task_categories:
- tabular-classification
configs:
- pol
license: cc
---
# Pol
The [Pol dataset](https://www.openml.org/search?type=data&sort=runs&id=151&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| pol | Binary classification | Has the pol cost gone up?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/pol", "pol")["train"]
``` |
ruanchaves/rerelem | 2023-04-14T11:01:24.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|harem",
"language:pt",
"relation extraction,",
"region:us"
] | ruanchaves | null | 2 | 69 | ---
annotations_creators:
- expert-generated
language:
- pt
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: ReRelEM
size_categories:
- 1K<n<10K
source_datasets:
- extended|harem
tags:
- relation extraction,
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for ReRelEM
## Dataset Description
- **Paper:** [Relation detection between named entities: report of a shared task](https://aclanthology.org/W09-2421.pdf)
- **Point of Contact:** [Hugo Gonçalo Oliveira](hroliv@dei.uc.pt)
### Dataset Summary
The ReRelEM dataset is designed for the detection and classification of relations between named entities in Portuguese text. It contains 2226 training, 701 validation, and 805 test instances. Each instance contains two sentences with two entities enclosed by the tags [E1] and [E2]. The dataset provides a fourfold relationship classification: identity, included-in, located-in, and other (which is detailed into twenty different relations).
It's important to note that, although we maintained more than 99% of the original instances, this is not a full representation of the original ReRelEM dataset.
The dataset was split into train, validation, and test sets, after which 21 instances with relation types not included in the training set were dropped from the test set. Furthermore, 7 instances from the original dataset that had formatting errors and could not be resolved into post-processed records were also dropped.
### Supported Tasks and Leaderboards
- Relation extraction: The primary task of this dataset is to classify relations between named entities.
### Languages
- Portuguese
## Dataset Structure
### Data Instances
An example data instance from the dataset:
```json
{
"docid": "cver",
"sentence1": "O PRESIDENTE Sarkozy abriu a Conferência de Dadores realizada em Paris com uma frase grandiloquente sobre a necessidade urgente de criar um Estado palestiniano no fim de 2008 . O Presidente ou é mentiroso ou finge-se ignorante, ou as duas coisas. Depois do falhanço esperado da cimeira de Annapolis , um modo de [E2]Condoleezza Rice[/E2] salvar a face e de a Administração | Administração americana e a Europa continuarem a fingir que estão interessadas em resolver o conflito israelo-palestiniano e de lavarem as mãos de tudo o resto, Sarkozy não pode ignorar que o momento para pronunciamentos débeis é o menos adequado. Tony Blair , depois de ter minado todo o processo de paz do Médio Oriente ao ordenar a invasão do Iraque de braço dado com [E1]Bush[/E1] , continua a emitir piedades deste género, e diz que está na altura de resolver o problema e que ele pode ser resolvido. Blair não sabe o que diz.",
"sentence2": "nan",
"label": "relacao_profissional",
"same_text": true
}
```
### Data Fields
- `docid`: Document ID of both sentences (sentence1 and sentence2)
- `sentence1`: The first sentence with an entity span enclosed by the tags [E1] and [/E1]
- `sentence2`: The second sentence with an entity span enclosed by the tags [E2] and [/E2]
- `label`: The type of relation between the entities
- `same_text`: True if both entity spans appear in the same sentence. If True, `sentence2` will be empty.
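Since the entity spans are marked inline with `[E1]`/`[E2]` tags, a small helper can recover them. A minimal sketch — the regex-based approach here is an illustration, not part of the dataset tooling:

```python
import re

def extract_entities(sentence1: str, sentence2: str = "") -> tuple:
    """Recover the [E1]...[/E1] and [E2]...[/E2] spans from an instance."""
    text = sentence1 + " " + sentence2
    e1 = re.search(r"\[E1\](.*?)\[/E1\]", text)
    e2 = re.search(r"\[E2\](.*?)\[/E2\]", text)
    return (
        e1.group(1) if e1 else None,
        e2.group(1) if e2 else None,
    )

# Same-text instance: both spans live in sentence1, sentence2 is empty.
print(extract_entities("... [E2]Condoleezza Rice[/E2] ... [E1]Bush[/E1] ..."))
# ('Condoleezza Rice', 'Bush')
```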
### Data Splits
| | train | validation | test |
|--------|-------|------------|------|
| Instances | 2226 | 701 | 805 |
The dataset was divided in a manner that ensured sentences from the same document did not appear in more than one split.
### Citation Information
```bibtex
@inproceedings{freitas2009relation,
title={Relation detection between named entities: report of a shared task},
author={Freitas, Cl{\\'a}udia and Santos, Diana and Mota, Cristina and Oliveira, Hugo Gon{\\c{c}}alo and Carvalho, Paula},
booktitle={Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009)},
pages={129--137},
year={2009}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | ||
TrainingDataPro/anti-spoofing_replay | 2023-09-14T16:49:15.000Z | [
"task_categories:video-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"legal",
"code",
"region:us"
] | TrainingDataPro | The dataset consists of 40,000 videos and selfies of unique people: 15,000
replay attacks from 4,000 unique devices, 10,000 attacks with A4 printouts, and
10,000 attacks with cut-out printouts. | @InProceedings{huggingface:dataset,
title = {anti-spoofing_replay},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 69 | ---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- finance
- legal
- code
dataset_info:
features:
- name: live_video_id
dtype: string
- name: phone
dtype: string
- name: video_file
dtype: string
- name: phone_video_playback
dtype: string
- name: worker_id
dtype: string
splits:
- name: train
num_bytes: 5063
num_examples: 30
download_size: 735628032
dataset_size: 5063
---
# Anti-Spoofing dataset: replay
The dataset consists of 40,000 videos and selfies of unique people: 15,000 replay attacks from 4,000 unique devices, 10,000 attacks with A4 printouts, and 10,000 attacks with cut-out printouts.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=anti-spoofing_replay) to discuss your requirements, learn about the price and buy the dataset.
# File with the extension .csv
includes the following information for each media file:
- **live_video_id**: the unique identifier of the "Antispoofing Live" video
- **phone**: the device used to capture the replay video
- **video_file**: the link to the replay video
- **phone_video_playback**: the device used to play the "Antispoofing Live" video
- **worker_id**: the identifier of the person who provided the media file
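Reading such a file needs nothing beyond the standard `csv` module. A minimal sketch — the row values below are invented placeholders; only the column names follow the `dataset_info` features above:

```python
import csv
import io

# Hypothetical one-row sample; only the header mirrors the schema above.
sample = """live_video_id,phone,video_file,phone_video_playback,worker_id
vid_0001,iPhone 12,img/vid_0001.mp4,Samsung Galaxy S21,21294
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    # Each row maps column name -> string value.
    print(row["worker_id"], row["phone"])  # 21294 iPhone 12
```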
# Folder "img" with media files
- containing all the photos and videos
- which correspond to the data in the .csv file
**How it works**: *go to the first folder and you will see that it contains media files taken by the person whose parameters are specified in the first line of the .csv file.*
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=anti-spoofing_replay) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
clarin-knext/quora-pl | 2023-06-07T08:16:00.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 69 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
jed351/Traditional-Chinese-Common-Crawl-Filtered | 2023-07-20T23:09:09.000Z | [
"language:zh",
"region:us"
] | jed351 | null | null | null | 3 | 69 | ---
language:
- zh
---
# Traditional Chinese C4
### Dataset Summary
Data obtained from the 2023-14 Common Crawl snapshot.
Downloaded and processed using [code](https://github.com/jedcheng/c4-dataset-script) based on another [project](https://github.com/shjwudp/c4-dataset-script) attempting to recreate the C4 dataset.
The resultant dataset contains both simplified and traditional Chinese, which could be found [here](https://huggingface.co/datasets/jed351/Chinese-Common-Crawl-Filtered).
It was then filtered using a [modified list](https://github.com/jedcheng/c4-dataset-script/blob/master/SC_filter/SC_list.txt) of simplified Chinese characters to obtain this traditional Chinese dataset.
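The filtering idea — keep a document only if it contains none of the characters on the simplified-Chinese list — can be sketched as follows. The three-character list here is a toy placeholder; the full list is linked above:

```python
# Toy placeholder for the full simplified-character list linked above.
SIMPLIFIED_CHARS = set("国发机")

def looks_traditional(text: str) -> bool:
    """Keep a document only if it contains no simplified-only characters."""
    return not any(ch in SIMPLIFIED_CHARS for ch in text)

docs = ["香港國際機場", "中国的发展"]
print([d for d in docs if looks_traditional(d)])  # ['香港國際機場']
```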
I would like to acknowledge the computational resources and support provided by the Imperial College Research Computing Service (http://doi.org/10.14469/hpc/2232). |
dim/lima | 2023-08-20T18:14:11.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 69 | ---
license: mit
dataset_info:
features:
- name: conversations
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2906937
num_examples: 1030
download_size: 1677611
dataset_size: 2906937
---
|
yzhuang/autotree_automl_100000_house_16H_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T09:07:23.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 69 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 948866270
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_house_16H_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mychen76/ds_receipts_v2_train | 2023-09-20T21:38:03.000Z | [
"region:us"
] | mychen76 | null | null | null | 0 | 69 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 102670815.483
num_examples: 1137
download_size: 102731891
dataset_size: 102670815.483
---
# Dataset Card for "ds_receipts_v2_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sarahpann/gsm8k_small_test | 2023-09-23T20:21:36.000Z | [
"region:us"
] | sarahpann | null | null | null | 0 | 69 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 109109
num_examples: 200
download_size: 64934
dataset_size: 109109
---
# Dataset Card for "gsm8k_small_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
insub/imdb_prefix15_POSITIVE_DPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english | 2023-10-02T08:11:16.000Z | [
"region:us"
] | insub | null | null | null | 0 | 69 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: x
dtype: string
- name: y_w
dtype: string
- name: y_l
dtype: string
splits:
- name: train
num_bytes: 7443134
num_examples: 12500
- name: test
num_bytes: 7446505
num_examples: 12500
download_size: 9240734
dataset_size: 14889639
---
# Dataset Card for "imdb_prefix15_POSITIVE_DPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ar_res_reviews | 2023-01-25T14:26:30.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"license:unknown",
"region:us"
] | null | Dataset of 8364 restaurant reviews scraped from qaym.com in Arabic for sentiment analysis | @InProceedings{10.1007/978-3-319-18117-2_2,
author="ElSahar, Hady
and El-Beltagy, Samhaa R.",
editor="Gelbukh, Alexander",
title="Building Large Arabic Multi-domain Resources for Sentiment Analysis",
booktitle="Computational Linguistics and Intelligent Text Processing",
year="2015",
publisher="Springer International Publishing",
address="Cham",
pages="23--34",
isbn="978-3-319-18117-2"
} | null | 3 | 68 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ArRestReviews
dataset_info:
features:
- name: polarity
dtype:
class_label:
names:
'0': negative
'1': positive
- name: text
dtype: string
- name: restaurant_id
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 3617097
num_examples: 8364
download_size: 3503230
dataset_size: 3617097
---
# Dataset Card for ArRestReviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces)
- **Repository:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces)
- **Paper:** [ Building Large Arabic Multi-domain Resources for Sentiment Analysis](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces/blob/master/Paper%20-%20Building%20Large%20Arabic%20Multi-domain%20Resources%20for%20Sentiment%20Analysis.pdf)
- **Point of Contact:** [hady elsahar](hadyelsahar@gmail.com)
### Dataset Summary
Dataset of 8364 restaurant reviews from qaym.com in Arabic for sentiment analysis
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is based on Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the following:
- "polarity": a class label of 0 (negative) or 1 (positive) indicating the sentiment of the review
- "text": the plain text of a restaurant review in Arabic
- "restaurant_id": the restaurant ID on the website
- "user_id": the user ID on the website
example:
```
{
'polarity': 0, # negative
'restaurant_id': '1412',
'text': 'عادي جدا مامن زود',
'user_id': '21294'
}
```
### Data Fields
- "polarity": a class label of 0 (negative) or 1 (positive) indicating the sentiment of the review
- "text": the plain text of a restaurant review in Arabic
- "restaurant_id": the restaurant ID on the website (string)
- "user_id": the user ID on the website (string)
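The integer label can be mapped back to its name using the `class_label` order declared in the `dataset_info` above; a minimal sketch:

```python
# Order matches the class_label names above: '0' -> negative, '1' -> positive.
LABEL_NAMES = ["negative", "positive"]

example = {
    "polarity": 0,
    "text": "عادي جدا مامن زود",
    "restaurant_id": "1412",
    "user_id": "21294",
}
print(LABEL_NAMES[example["polarity"]])  # negative
```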
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
Contains 8364 restaurant reviews from qaym.com
#### Who are the source language producers?
The reviews were written by users of qaym.com.
### Annotations
The `polarity` field provides a label of 0 (negative) or 1 (positive) pertaining to the sentiment of the review
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@InProceedings{10.1007/978-3-319-18117-2_2,
author="ElSahar, Hady
and El-Beltagy, Samhaa R.",
editor="Gelbukh, Alexander",
title="Building Large Arabic Multi-domain Resources for Sentiment Analysis",
booktitle="Computational Linguistics and Intelligent Text Processing",
year="2015",
publisher="Springer International Publishing",
address="Cham",
pages="23--34",
isbn="978-3-319-18117-2"
}
### Contributions
Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset. |
squad_v1_pt | 2023-04-05T13:40:41.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pt",
"license:mit",
"arxiv:1606.052... | null | Portuguese translation of the SQuAD dataset. The translation was performed automatically using the Google Cloud API. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | null | 4 | 68 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- pt
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: null
pretty_name: SquadV1Pt
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 85323237
num_examples: 87599
- name: validation
num_bytes: 11265474
num_examples: 10570
download_size: 39532595
dataset_size: 96588711
---
# Dataset Card for "squad_v1_pt"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/nunorc/squad-v1.1-pt](https://github.com/nunorc/squad-v1.1-pt)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 39.53 MB
- **Size of the generated dataset:** 96.72 MB
- **Total amount of disk used:** 136.25 MB
### Dataset Summary
Portuguese translation of the SQuAD dataset. The translation was performed automatically using the Google Cloud API.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 39.53 MB
- **Size of the generated dataset:** 96.72 MB
- **Total amount of disk used:** 136.25 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [0],
"text": ["Saint Bernadette Soubirous"]
},
"context": "\"Arquitetonicamente, a escola tem um caráter católico. No topo da cúpula de ouro do edifício principal é uma estátua de ouro da ...",
"id": "5733be284776f41900661182",
"question": "A quem a Virgem Maria supostamente apareceu em 1858 em Lourdes, na França?",
"title": "University_of_Notre_Dame"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
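`answer_start` is a character offset into `context`; a quick sanity check on a hypothetical (invented) record:

```python
# Hypothetical record; the offset arithmetic is what matters here.
example = {
    "context": "A capital de Portugal é Lisboa.",
    "question": "Qual é a capital de Portugal?",
    "answers": {"text": ["Lisboa"], "answer_start": [24]},
}

start = example["answers"]["answer_start"][0]
answer = example["answers"]["text"][0]
span = example["context"][start:start + len(answer)]
print(span)  # Lisboa
```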
### Data Splits
| name | train | validation |
| ------- | ----: | ---------: |
| default | 87599 | 10570 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
TristanBehrens/js-fakes-4bars | 2022-01-03T15:53:23.000Z | [
"region:us"
] | TristanBehrens | null | null | null | 9 | 68 | # JSFakes (Dr. Tristan Behrens).
This is a tokenized version of the JS-Fakes dataset by Omar Peracha. The original dataset can be found here:
[js-fakes.git](https://github.com/omarperacha/js-fakes.git)
The representation is four tracks with four bars per track.
## Purpose.
This dataset is a good starting point for Music Generation. You could train GPT-2 on the samples to compose music.
## Contact.
Find me on [LinkedIn](https://www.linkedin.com/in/dr-tristan-behrens-734967a2/) and say hello.
If you find an issue or have a feature request, please contact me.
Please be so kind and like this dataset if you find it useful. |
eugenesiow/Set5 | 2022-10-21T03:59:16.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"other-image-super-resolution",
"region:us"
] | eugenesiow | Set5 is an evaluation dataset with 5 RGB images for the image super-resolution task. | @article{bevilacqua2012low,
title={Low-complexity single-image super-resolution based on nonnegative neighbor embedding},
author={Bevilacqua, Marco and Roumy, Aline and Guillemot, Christine and Alberi-Morel, Marie Line},
year={2012},
publisher={BMVA press}
} | null | 0 | 68 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: Set5
tags:
- other-image-super-resolution
---
# Dataset Card for Set5
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: http://people.rennes.inria.fr/Aline.Roumy/results/SR_BMVC12.html
- **Repository**: https://huggingface.co/datasets/eugenesiow/Set5
- **Paper**: http://people.rennes.inria.fr/Aline.Roumy/publi/12bmvc_Bevilacqua_lowComplexitySR.pdf
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
Set5 is an evaluation dataset with 5 RGB images for the image super-resolution task. The 5 images in the dataset are "baby", "bird", "butterfly", "head", and "woman".
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Set5', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
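Models on these leaderboards are ranked primarily by PSNR, which is simple enough to sketch without any dependencies. This is an illustration of the metric itself; the `super-image` library's `EvalMetrics` computes it internally and its exact implementation may differ:

```python
import math

def psnr(hr, sr, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(hr, sr)) / len(hr)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

print(round(psnr([0, 255, 128], [0, 250, 128]), 2))  # 38.92
```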
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/Set5_HR/baby.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/Set5_LR_x2/baby.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|5|
|bicubic_x3|5|
|bicubic_x4|5|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Bevilacqua et al.](http://people.rennes.inria.fr/Aline.Roumy/results/SR_BMVC12.html)
### Licensing Information
Academic use only.
### Citation Information
```bibtex
@article{bevilacqua2012low,
title={Low-complexity single-image super-resolution based on nonnegative neighbor embedding},
author={Bevilacqua, Marco and Roumy, Aline and Guillemot, Christine and Alberi-Morel, Marie Line},
year={2012},
publisher={BMVA press}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
|
tne | 2023-01-25T15:04:06.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2109.12085",
"region:us"
] | null | TNE is an NLU task, which focuses on relations between noun phrases (NPs) that can be mediated via prepositions.
The dataset contains 5,497 documents, annotated exhaustively with all possible links between the NPs in each document. | @article{tne,
author = {Elazar, Yanai and Basmov, Victoria and Goldberg, Yoav and Tsarfaty, Reut},
title = "{Text-based NP Enrichment}",
journal = {Transactions of the Association for Computational Linguistics},
year = {2022},
} | null | 0 | 68 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
pretty_name: Text-based NP Enrichment
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: nps
list:
- name: text
dtype: string
- name: first_char
dtype: int32
- name: last_char
dtype: int32
- name: first_token
dtype: int32
- name: last_token
dtype: int32
- name: id
dtype: string
- name: np_relations
list:
- name: anchor
dtype: string
- name: complement
dtype: string
- name: preposition
dtype:
class_label:
names:
'0': about
'1': for
'2': with
'3': from
'4': among
'5': by
'6': 'on'
'7': at
'8': during
'9': of
'10': member(s) of
'11': in
'12': after
'13': under
'14': to
'15': into
'16': before
'17': near
'18': outside
'19': around
'20': between
'21': against
'22': over
'23': inside
- name: complement_coref_cluster_id
dtype: string
- name: coref
list:
- name: id
dtype: string
- name: members
sequence: string
- name: np_type
dtype:
class_label:
names:
'0': standard
'1': time/date/measurement
'2': idiomatic
- name: metadata
struct:
- name: annotators
struct:
- name: coref_worker
dtype: int32
- name: consolidator_worker
dtype: int32
- name: np-relations_worker
sequence: int32
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 41308170
num_examples: 3988
- name: validation
num_bytes: 5495419
num_examples: 500
- name: test
num_bytes: 2203716
num_examples: 500
- name: test_ood
num_bytes: 2249352
num_examples: 509
download_size: 14194578
dataset_size: 51256657
---
# Dataset Card for Text-based NP Enrichment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://yanaiela.github.io/TNE/
- **Repository:** https://github.com/yanaiela/TNE
- **Paper:** https://arxiv.org/abs/2109.12085
- **Leaderboard:** [TNE OOD](https://leaderboard.allenai.org/tne-ood/submissions/public)
[TNE](https://leaderboard.allenai.org/tne/submissions/public)
- **Point of Contact:** [Yanai Elazar](mailto:yanaiela@gmail.com)
### Dataset Summary
Text-based NP Enrichment (TNE) is a natural language understanding (NLU) task, which focuses on relations between noun phrases (NPs) that can be mediated via prepositions. The dataset contains 5,497 documents, annotated exhaustively with all possible links between the NPs in each document.
The main data comes from WikiNews, which is used for train/dev/test. We also collected an additional set of 509 documents to serve as out of distribution (OOD) data points, from the Book Corpus, IMDB reviews and Reddit.
### Supported Tasks and Leaderboards
The data contains both the main data for the TNE task and coreference resolution data.
There are two leaderboards for the TNE data, one for the standard test set, and another one for the OOD test set:
- [TNE Leaderboard](https://leaderboard.allenai.org/tne/submissions/public)
- [TNE OOD Leaderboard](https://leaderboard.allenai.org/tne-ood/submissions/public)
### Languages
The text in the dataset is in English, as spoken in the different domains we include. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
The original files are in JSON Lines (jsonl) format, with a dictionary for a single document on each line.
Each document contains a different number of labels, due to the different number of NPs.
The test and OOD splits come without the annotated labels.
### Data Fields
A document consists of:
* `id`: a unique identifier of a document, beginning with `r` and followed by a number
* `text`: the text of the document. The title and subtitles (if they exist) are separated with two new lines. The paragraphs
are separated by a single new line.
* `tokens`: a list of strings containing the document's tokens
* `nps`: a list of dictionaries, containing the following entries:
  * `text`: the text of the np
  * `first_char`: an integer indicating the index of the np's first character in the text
  * `last_char`: an integer indicating the index of the np's last character in the text
  * `first_token`: an integer indicating the index of the np's first token in the tokenized tokens
  * `last_token`: an integer indicating the index of the np's last token in the tokenized tokens
  * `id`: the id of the np
* `np_relations`: these are the relation labels of the document. It is a list of dictionaries, where each
dictionary contains:
* `anchor`: the id of the anchor np
* `complement`: the id of the complement np
  * `preposition`: the preposition that links the anchor and the complement. It takes one of 24 pre-defined values (23 prepositions + member(s) of)
  * `complement_coref_cluster_id`: the id of the coreference cluster that the complement is part of
* `coref`: the coreference labels. It contains a list of dictionaries, where each dictionary contains:
* `id`: the id of the coreference cluster
  * `members`: the ids of the NPs that are members of the cluster
  * `np_type`: the type of the cluster. It can be one of:
    * `standard`: a regular coreference cluster
    * `time/date/measurement`: a time / date / measurement np. These are singletons.
    * `idiomatic`: an idiomatic expression
* `metadata`: metadata of the document. It contains the following:
* `annotators`: a dictionary with anonymized annotators id
* `coref_worker`: the coreference worker id
* `consolidator_worker`: the consolidator worker id
* `np-relations_worker`: the np relations worker id
* `url`: the url where the document was taken from (not always existing)
* `source`: the original file name where the document was taken from
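To make the schema concrete, here is a toy document in the shape described above, with the NP links resolved into human-readable (anchor, preposition, complement) triples. The ids and texts below are invented for illustration; note also that in the released features the `preposition` and `np_type` fields are stored as class-label indices, so you would map them back to strings via `dataset.features`:

```python
# A toy document following the fields described above (ids and texts are invented)
doc = {
    "id": "r0001",
    "text": "The mayor of Boston gave a speech.",
    "nps": [
        {"id": "np0", "text": "The mayor"},
        {"id": "np1", "text": "Boston"},
        {"id": "np2", "text": "a speech"},
    ],
    "np_relations": [
        {"anchor": "np0", "complement": "np1", "preposition": "of"},
        {"anchor": "np2", "complement": "np0", "preposition": "by"},
    ],
}

# Resolve NP ids to their surface text
np_text = {n["id"]: n["text"] for n in doc["nps"]}
triples = [
    (np_text[r["anchor"]], r["preposition"], np_text[r["complement"]])
    for r in doc["np_relations"]
]
print(triples)  # [('The mayor', 'of', 'Boston'), ('a speech', 'by', 'The mayor')]
```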
### Data Splits
The dataset is spread across four files, for the four different splits: train, dev, test and test_ood.
Additional details on the data statistics can be found in the [paper](https://arxiv.org/abs/2109.12085).
## Dataset Creation
### Curation Rationale
TNE was built as a new task for language understanding, focusing on extracting relations between noun phrases, mediated by prepositions.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was created by Yanai Elazar, Victoria Basmov, Yoav Goldberg, and Reut Tsarfaty, during work done at Bar-Ilan University and AI2.
### Licensing Information
The data is released under the MIT license.
### Citation Information
```bibtex
@article{tne,
author = {Elazar, Yanai and Basmov, Victoria and Goldberg, Yoav and Tsarfaty, Reut},
title = "{Text-based NP Enrichment}",
journal = {Transactions of the Association for Computational Linguistics},
year = {2022},
}
```
### Contributions
Thanks to [@yanaiela](https://github.com/yanaiela), who is also the first author of the paper, for adding this dataset. |
ashraq/hotel-reviews | 2022-10-27T17:24:29.000Z | [
"region:us"
] | ashraq | null | null | null | 1 | 68 | ---
dataset_info:
features:
- name: review_date
dtype: string
- name: hotel_name
dtype: string
- name: review
dtype: string
splits:
- name: train
num_bytes: 15043294
num_examples: 93757
download_size: 6100544
dataset_size: 15043294
---
# Dataset Card for "hotel-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Data was obtained from [here](https://www.kaggle.com/datasets/jiashenliu/515k-hotel-reviews-data-in-europe) |
range3/cc100-ja | 2023-02-04T05:43:32.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:ja",
"license:unknown",
"region:us"
] | range3 | null | null | null | 6 | 68 | ---
license: unknown
task_categories:
- text-generation
- fill-mask
language:
- ja
---
# range3/cc100-ja
This dataset consists of parquet files from the cc100 dataset with only the Japanese language extracted and sharded.
このデータセットは、cc100データセットの日本語のみを抽出し、シャーディングしたparquetファイルで構成されます。 |
johnrobinsn/alpaca-cleaned | 2023-03-30T08:42:40.000Z | [
"region:us"
] | johnrobinsn | null | null | null | 0 | 68 | Entry not found |
mstz/phoneme | 2023-04-11T00:14:47.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"phoneme",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 0 | 68 | ---
language:
- en
tags:
- phoneme
- tabular_classification
- binary_classification
pretty_name: Phoneme
size_categories:
- 10K<n<100K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- phoneme
---
# Phoneme
The [Phoneme dataset](https://www.openml.org/search?type=data&sort=runs&id=1489&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| phoneme | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/phoneme")["train"]
```
|
mstz/ipums | 2023-04-17T09:54:47.000Z | [
"task_categories:tabular-classification",
"language:en",
"ipums",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_ipums_census_database_127,
author = {Ruggles,Steven & Sobek,Matthew},
title = {{IPUMS Census Database}},
year = {1999},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5BG63}}
} | null | 0 | 68 | ---
language:
- en
tags:
- ipums
- tabular_classification
- binary_classification
- UCI
pretty_name: Ipums
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- ipums
---
# Ipums
The [Ipums dataset](https://archive-beta.ics.uci.edu/dataset/127/ipums+census+database) from the [UCI repository](https://archive-beta.ics.uci.edu/).
|
mstz/hypo | 2023-05-24T12:27:51.000Z | [
"task_categories:tabular-classification",
"language:en",
"hypo",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 0 | 68 | ---
language:
- en
tags:
- hypo
- tabular_classification
- binary_classification
pretty_name: Hypo
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- hypo
---
# Hypo
The Hypo dataset.
# Configurations and tasks
| **Configuration** | **Task** | **Description**|
|-----------------------|---------------------------|----------------|
| hypo | Multiclass classification.| What kind of hypothyroidism does the patient have? |
| has_hypo              | Binary classification.    | Does the patient have hypothyroidism? |
|
yulanfmy/databricks-qa-ja | 2023-05-15T14:55:06.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:ja",
"license:cc-by-sa-3.0",
"region:us"
] | yulanfmy | null | null | null | 2 | 68 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- ja
size_categories:
- 1K<n<10K
---
# Dataset Summary
A Japanese dataset of manually created question-and-answer pairs about Databricks.
- Number of examples: approximately 1,300
- Sources: Japanese blog posts and FAQs from the Databricks website, as well as Qiita articles posted by Databricks employees
This is the data used for the demo at https://github.com/yulan-yan/build-your-chat-bot-JP.
richardr1126/spider-context-validation | 2023-10-03T20:53:20.000Z | [
"source_datasets:spider",
"language:en",
"license:cc-by-4.0",
"text-to-sql",
"SQL",
"spider",
"validation",
"eval",
"spider-eval",
"region:us"
] | richardr1126 | null | null | null | 0 | 68 | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
pretty_name: Spider Context Validation
tags:
- text-to-sql
- SQL
- spider
- validation
- eval
- spider-eval
dataset_info:
features:
- name: db_id
dtype: string
- name: question
dtype: string
- name: db_info
dtype: string
- name: ground_truth
dtype: string
---
# Dataset Card for Spider Context Validation
### Dataset Summary
Spider is a large-scale, complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset was created to validate spider-fine-tuned LLMs with database context.
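A typical way to use these fields is to assemble a prompt from `db_info` and `question`, generate SQL, and compare it against `ground_truth` with the Spider evaluation scripts. The sketch below uses an invented record and one possible prompt format — neither is prescribed by the dataset:

```python
def build_prompt(record: dict) -> str:
    """Assemble a text-to-SQL prompt from one validation record."""
    return (
        f"### Database schema:\n{record['db_info']}\n\n"
        f"### Question: {record['question']}\n"
        f"### SQL:"
    )

# Invented record with the same fields as the dataset
record = {
    "db_id": "concert_singer",
    "question": "How many singers do we have?",
    "db_info": "CREATE TABLE singer (singer_id INT, name TEXT, age INT);",
    "ground_truth": "SELECT count(*) FROM singer",
}
print(build_prompt(record))
```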
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
``` |
keirp/open-web-math-dev | 2023-07-18T17:43:41.000Z | [
"language:en",
"region:us"
] | keirp | null | null | null | 1 | 68 | ---
language: en
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 46793390925
num_examples: 2948527
download_size: 23882813026
dataset_size: 46793390925
---
# Dataset Card for "open-web-math-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lamini/bts | 2023-07-24T03:50:41.000Z | [
"region:us"
] | lamini | null | null | null | 1 | 68 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 129862.8
num_examples: 126
- name: test
num_bytes: 14429.2
num_examples: 14
download_size: 50390
dataset_size: 144292.0
---
# Dataset Card for "bts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
svjack/cmmlu_ed | 2023-07-24T06:56:54.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-4.0",
"chinese",
"llm",
"evaluation",
"arxiv:2306.09212",
"region:us"
] | svjack | CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context. | @misc{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
year={2023},
eprint={2306.09212},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 68 | ---
license: cc-by-nc-4.0
task_categories:
- multiple-choice
- question-answering
language:
- zh
tags:
- chinese
- llm
- evaluation
pretty_name: CMMLU
size_categories:
- 10K<n<100K
---
# CMMLU: Measuring massive multitask language understanding in Chinese
- **Homepage:** [https://github.com/haonan-li/CMMLU](https://github.com/haonan-li/CMMLU)
- **Repository:** [https://huggingface.co/datasets/haonan-li/cmmlu](https://huggingface.co/datasets/haonan-li/cmmlu)
- **Paper:** [CMMLU: Measuring Chinese Massive Multitask Language Understanding](https://arxiv.org/abs/2306.09212).
## Table of Contents
- [Introduction](#introduction)
- [Leaderboard](#leaderboard)
- [Data](#data)
- [Citation](#citation)
- [License](#license)
## Introduction
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context.
CMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels. It includes subjects that require computational expertise, such as physics and mathematics, as well as disciplines within humanities and social sciences.
Many of these tasks are not easily translatable from other languages due to their specific contextual nuances and wording.
Furthermore, numerous tasks within CMMLU have answers that are specific to China and may not be universally applicable or considered correct in other regions or languages.
## Leaderboard
Latest leaderboard is in our [github](https://github.com/haonan-li/CMMLU).
## Data
We provide development and test datasets for each of the 67 subjects, with 5 questions in the development set and 100+ questions in the test set.
Each question in the dataset is a multiple-choice question with 4 choices and only one correct answer.
Here are two examples:
```
题目:同一物种的两类细胞各产生一种分泌蛋白,组成这两种蛋白质的各种氨基酸含量相同,但排列顺序不同。其原因是参与这两种蛋白质合成的:
A. tRNA种类不同
B. 同一密码子所决定的氨基酸不同
C. mRNA碱基序列不同
D. 核糖体成分不同
答案是:C
```
```
题目:某种植物病毒V是通过稻飞虱吸食水稻汁液在水稻间传播的。稻田中青蛙数量的增加可减少该病毒在水稻间的传播。下列叙述正确的是:
A. 青蛙与稻飞虱是捕食关系
B. 水稻和病毒V是互利共生关系
C. 病毒V与青蛙是寄生关系
D. 水稻与青蛙是竞争关系
答案是:
```
#### Load data
```python
from datasets import load_dataset
cmmlu=load_dataset(r"haonan-li/cmmlu", 'agronomy')
print(cmmlu['test'][0])
```
#### Load all data at once
```python
task_list = ['agronomy', 'anatomy', 'ancient_chinese', 'arts', 'astronomy', 'business_ethics', 'chinese_civil_service_exam', 'chinese_driving_rule', 'chinese_food_culture', 'chinese_foreign_policy', 'chinese_history', 'chinese_literature',
'chinese_teacher_qualification', 'clinical_knowledge', 'college_actuarial_science', 'college_education', 'college_engineering_hydrology', 'college_law', 'college_mathematics', 'college_medical_statistics', 'college_medicine', 'computer_science',
'computer_security', 'conceptual_physics', 'construction_project_management', 'economics', 'education', 'electrical_engineering', 'elementary_chinese', 'elementary_commonsense', 'elementary_information_and_technology', 'elementary_mathematics',
'ethnology', 'food_science', 'genetics', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_geography', 'high_school_mathematics', 'high_school_physics', 'high_school_politics', 'human_sexuality',
'international_law', 'journalism', 'jurisprudence', 'legal_and_moral_basis', 'logical', 'machine_learning', 'management', 'marketing', 'marxist_theory', 'modern_chinese', 'nutrition', 'philosophy', 'professional_accounting', 'professional_law',
'professional_medicine', 'professional_psychology', 'public_relations', 'security_study', 'sociology', 'sports_science', 'traditional_chinese_medicine', 'virology', 'world_history', 'world_religions']
from datasets import load_dataset
cmmlu = {k: load_dataset(r"haonan-li/cmmlu", k) for k in task_list}
```
## Citation
```
@misc{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
year={2023},
eprint={2306.09212},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The CMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
|
C-MTEB/Mmarco-reranking | 2023-07-28T07:25:10.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 68 | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: dev
num_bytes: 32794704
num_examples: 100
download_size: 17401514
dataset_size: 32794704
---
# Dataset Card for "Mmarco-reranking"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
izumi-lab/wikipedia-en-20230720 | 2023-07-29T03:06:05.000Z | [
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | izumi-lab | null | null | null | 4 | 68 | ---
dataset_info:
features:
- name: curid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16118978135
num_examples: 6650632
download_size: 9566993111
dataset_size: 16118978135
license: cc-by-sa-3.0
language:
- en
---
# Dataset Card for "wikipedia-en-20230720"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Daoguang/CodeM-Multilinugal-Data | 2023-09-01T02:33:32.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"arxiv:2308.16824",
"region:us"
] | Daoguang | null | null | null | 3 | 68 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: Python
path: "python.json"
- split: JavaScript
path: "js.json"
- split: TypeScript
path: "ts.json"
- split: C
path: "c.json"
- split: Cpp
path: "cpp.json"
- split: Java
path: "java.json"
- split: Go
path: "go.json"
- split: HTML
path: "html.json"
- split: Mixed
path: "mixed.json"
language:
- en
pretty_name: CodeM_data
size_categories:
- 10K<n<100K
---
# CodeM: Can Programming Languages Boost Each Other via Instruction Tuning?
[Paper](https://arxiv.org/pdf/2308.16824.pdf) [GitHub](https://github.com/NL2Code/CodeM/tree/main/data)
## Abstract
When human programmers have mastered a programming language, it would be easier when they learn a new programming language. In this report, we focus on exploring whether programming languages can boost each other during the instruction fine-tuning phase of code large language models. We conduct extensive experiments of 8 popular programming languages (Python, JavaScript, TypeScript, C, C++, Java, Go, HTML) on StarCoder. Results demonstrate that programming languages can significantly improve each other. For example, CodeM-Python 15B trained on Python is able to increase Java by an absolute 17.95% pass@1 on HumanEval-X. More surprisingly, we found that CodeM-HTML 7B trained on the HTML corpus can improve Java by an absolute 15.24% pass@1. Our training data is released at [this https URL](https://huggingface.co/datasets/Daoguang/CodeM-Multilinugal-Data).
## Usage
```python
from datasets import load_dataset
# load CodeM's training data
dataset = load_dataset("Daoguang/CodeM-Multilinugal-Data")
```
## Reference
```
@misc{zan2023codem,
title={Can Programming Languages Boost Each Other via Instruction Tuning?},
author={Daoguang Zan and Ailun Yu and Bo Shen and Jiaxin Zhang and Taihong Chen and Bing Geng and Bei Chen and Jichuan Ji and Yafen Yao and Yongji Wang and Qianxiang Wang},
year={2023},
eprint={2308.16824},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
richardr1126/spider-context-validation-ranked-schema | 2023-09-07T22:12:48.000Z | [
"source_datasets:spider",
"language:en",
"license:cc-by-4.0",
"text-to-sql",
"SQL",
"spider",
"validation",
"eval",
"spider-eval",
"region:us"
] | richardr1126 | null | null | null | 0 | 68 | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
pretty_name: Spider Context Validation Schema Ranked
tags:
- text-to-sql
- SQL
- spider
- validation
- eval
- spider-eval
dataset_info:
features:
- name: index
dtype: int32
- name: db_id
dtype: string
- name: question
dtype: string
- name: db_info
dtype: string
- name: ground_truth
dtype: string
---
# Dataset Card for Spider Context Validation
### Ranked Schema by ChatGPT
The database context used here was generated by ChatGPT after instructing it to reorder the schema so that the most relevant columns appear at the beginning of `db_info`.
### Dataset Summary
Spider is a large-scale, complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset was created to validate spider-fine-tuned LLMs with database context.
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
``` |
manu/french_podcasts | 2023-09-20T13:57:01.000Z | [
"region:us"
] | manu | null | null | null | 0 | 68 | ---
dataset_info:
features:
- name: programme_id
dtype: string
- name: programme_entry_date
dtype: string
- name: programme_rss_link
dtype: string
- name: podcast_title
dtype: string
- name: podcast_date
dtype: string
- name: podcast_duration
dtype: string
- name: audio_podcast_link
dtype: string
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 7558333
num_examples: 1401
download_size: 3696664
dataset_size: 7558333
---
# Dataset Card for "french_podcasts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SebastianMoncaleano/cammel_trainning_v3 | 2023-10-01T18:30:54.000Z | [
"region:us"
] | SebastianMoncaleano | null | null | null | 0 | 68 | Entry not found |
muchocine | 2023-01-25T14:40:54.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:unknown",
"region:us"
] | null | The Muchocine reviews dataset contains 3,872 longform movie reviews in the Spanish language,
each with a shorter summary review, and a rating on a 1-5 scale. | null | null | 4 | 67 | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Muchocine
dataset_info:
features:
- name: review_body
dtype: string
- name: review_summary
dtype: string
- name: star_rating
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
splits:
- name: train
num_bytes: 11871095
num_examples: 3872
download_size: 55556703
dataset_size: 11871095
---
# Dataset Card for Muchocine
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.lsi.us.es/~fermin/index.php/Datasets
### Dataset Summary
The Muchocine reviews dataset contains 3,872 longform movie reviews in the Spanish language,
each with a shorter summary review, and a rating on a 1-5 scale.
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used for Text Classification, more precisely Sentiment Classification, where the task is to predict the `star_rating` for a `review_body` or a `review_summary`.
### Languages
Spanish.
## Dataset Structure
### Data Instances
An example from the train split:
```
{
'review_body': 'Zoom nos cuenta la historia de Jack Shepard, anteriormente conocido como el Capitán Zoom, Superhéroe que perdió sus poderes y que actualmente vive en el olvido. La llegada de una amenaza para la Tierra hará que la agencia del gobierno que se ocupa de estos temas acuda a él para que entrene a un grupo de jóvenes con poderes para combatir esta amenaza.Zoom es una comedia familiar, con todo lo que eso implica, es decir, guión flojo y previsible, bromas no salidas de tono, historia amorosa de por medio y un desenlace tópico. La gracia está en que los protagonistas son jóvenes con superpoderes, una producción cargada de efectos especiales y unos cuantos guiños frikis. La película además se pasa volando ya que dura poco mas de ochenta minutos y cabe destacar su prologo en forma de dibujos de comics explicando la historia de la cual partimos en la película.Tim Allen protagoniza la cinta al lado de un envejecido Chevy Chase, que hace de doctor encargado del proyecto, un papel bastante gracioso y ridículo, pero sin duda el mejor papel es el de Courteney Cox, en la piel de una científica amante de los comics y de lo más friki. Del grupito de los cuatro niños sin duda la mas graciosa es la niña pequeña con súper fuerza y la que provocara la mayor parte de los gags debido a su poder.Una comedia entretenida y poca cosa más para ver una tarde de domingo. ',
'review_summary': 'Una comedia entretenida y poca cosa más para ver una tarde de domingo ',
'star_rating': 2
}
```
### Data Fields
- `review_body` - longform review
- `review_summary` - shorter-form review
- `star_rating` - an integer star rating (1-5)
The original source also includes part-of-speech tagging for body and summary fields.
### Data Splits
One split (train) with 3,872 reviews.
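For binary or ternary sentiment experiments, the 1-5 `star_rating` is often collapsed into coarse labels. A minimal sketch — the thresholds below are a common convention, not something defined by the dataset itself:

```python
def star_to_sentiment(star_rating: int) -> str:
    """Collapse a 1-5 star rating into a coarse sentiment label.

    Ratings 1-2 are treated as negative, 3 as neutral, 4-5 as positive;
    these cutoffs are an assumed convention, not part of the dataset.
    """
    if not 1 <= star_rating <= 5:
        raise ValueError(f"star_rating must be in 1..5, got {star_rating}")
    if star_rating <= 2:
        return "negative"
    if star_rating == 3:
        return "neutral"
    return "positive"

# The example instance above has star_rating == 2:
print(star_to_sentiment(2))  # negative
```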
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Data was collected from www.muchocine.net and uploaded by Dr. Fermín L. Cruz Mata
of the Universidad de Sevilla.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The text reviews and star ratings came directly from users, so no additional annotation was needed.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dr. Fermín L. Cruz Mata.
### Licensing Information
[More Information Needed]
### Citation Information
See http://www.lsi.us.es/~fermin/index.php/Datasets
### Contributions
Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset. |
DDSC/lcc | 2023-07-20T19:43:29.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"region:us"
] | DDSC | null | null | null | 3 | 67 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: LCC
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for LCC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository**: https://github.com/fnielsen/lcc-sentiment
- **Direct Download part 1**: https://raw.githubusercontent.com/fnielsen/lcc-sentiment/master/dan_mixed_2014_10K-sentences.csv
- **Direct Download part 2**: https://raw.githubusercontent.com/fnielsen/lcc-sentiment/master/dan_newscrawl_2011_10K-sentences.csv
### Dataset Summary
This dataset consists of Danish data from [the Leipzig Collection](https://www.aclweb.org/anthology/L06-1396/) that has been annotated for sentiment analysis by Finn Årup Nielsen.
### Supported Tasks and Leaderboards
This dataset is suitable for sentiment analysis.
### Languages
This dataset is in Danish.
## Dataset Structure
### Data Instances
Every entry in the dataset has a document and an associated label.
### Data Fields
An entry in the dataset consists of the following fields:
- `text` (`str`): The text content.
- `label` (`str`): The label of the `text`. Can be "positiv", "neutral" or "negativ" for positive, neutral and negative sentiment, respectively.
### Data Splits
A `train` and `test` split is available, with the test split being 30% of the dataset, randomly sampled in a stratified fashion. There are 349 documents in the training split and 150 in the test split.
## Additional Information
### Dataset Curators
The collection and annotation of the dataset are solely the work of Finn Årup Nielsen. The data was originally annotated with a score between -5 and +5, but the labels in this version have been converted to negative, neutral and positive.
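The score-to-label conversion described above can be sketched as follows; the exact cutoffs used by the curator are not documented, so treating a score of exactly zero as neutral is an assumption:

```python
def score_to_label(score: float) -> str:
    """Collapse an annotation score in [-5, +5] into the three labels
    used in this version of the dataset. Mapping score == 0 to neutral
    is an assumption about how the conversion was performed.
    """
    if score < 0:
        return "negativ"
    if score > 0:
        return "positiv"
    return "neutral"

print(score_to_label(-3))  # negativ
```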
### Licensing Information
The dataset is released under the CC BY 4.0 license.
### Citation Information
```
@misc{lcc,
title={LCC},
author={Finn Årup Nielsen},
year={2016},
note={\url{https://github.com/fnielsen/lcc-sentiment}}
}
```
### Contributions
Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub. |
anjandash/java-8m-methods-v2 | 2022-07-01T20:31:57.000Z | [
"multilinguality:monolingual",
"language:java",
"license:mit",
"region:us"
] | anjandash | null | null | null | 0 | 67 | ---
language:
- java
license:
- mit
multilinguality:
- monolingual
pretty_name: java-8m-methods-v2
--- |
ScandEval/scala-sv | 2023-07-05T09:49:04.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:sv",
"license:cc-by-sa-4.0",
"region:us"
] | ScandEval | null | null | null | 0 | 67 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- sv
size_categories:
- 1K<n<10K
--- |
jahjinx/IMDb_movie_reviews | 2023-01-08T15:47:19.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | jahjinx | null | null | null | 0 | 67 | ---
pretty_name: IMDb
task_categories:
- text-classification
task_ids:
- sentiment-classification
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
---
# Dataset Card for IMDb Movie Reviews
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Total amount of disk used:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This is a custom train/test/validation split of the IMDb Large Movie Review Dataset available from [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
#### IMDb_movie_reviews
An example of 'train':
```
{
"text": "Beautifully photographed and ably acted, generally, but the writing is very slipshod. There are scenes of such unbelievability that there is no joy in the watching. The fact that the young lover has a twin brother, for instance, is so contrived that I groaned out loud. And the "emotion-light bulb connection" seems gimmicky, too.<br /><br />I don\'t know, though. If you have a few glasses of wine and feel like relaxing with something pretty to look at with a few flaccid comedic scenes, this is a pretty good movie. No major effort on the part of the viewer required. But Italian film, especially Italian comedy, is usually much, much better than this."
"label": 0,
}
```
### Data Fields
The data fields are the same among all splits.
#### IMDb_movie_reviews
- `text`: a `string` feature.
- `label`: a classification label, with values `neg` (0), `pos` (1).
### Data Splits
| name | train | validation | test |
|------------------|------:|-----------:|------:|
|IMDb_movie_reviews| 36000 | 4000 | 10000 |
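The integer labels map to class names as described in the field list above. A small helper for converting between the two representations:

```python
# Mapping taken from the Data Fields section: neg -> 0, pos -> 1.
ID2LABEL = {0: "neg", 1: "pos"}
LABEL2ID = {name: i for i, name in ID2LABEL.items()}

def label_name(label_id: int) -> str:
    """Map the integer class id used in the dataset to its name."""
    return ID2LABEL[label_id]

# The 'train' example above carries label 0:
print(label_name(0))  # neg
```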
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
[More Information Needed] |
HuggingFaceGECLM/REDDIT_comments | 2023-03-17T07:52:51.000Z | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10B<n<100B",
"language:en",
"reddit",
"social-media",
"arxiv:2001.08435",
"region:us"
... | HuggingFaceGECLM | null | null | null | 3 | 67 | ---
dataset_info:
features:
- name: archived
dtype: string
- name: author
dtype: string
- name: author_fullname
dtype: string
- name: body
dtype: string
- name: comment_type
dtype: string
- name: controversiality
dtype: string
- name: created_utc
dtype: string
- name: edited
dtype: string
- name: gilded
dtype: string
- name: id
dtype: string
- name: link_id
dtype: string
- name: locked
dtype: string
- name: name
dtype: string
- name: parent_id
dtype: string
- name: permalink
dtype: string
- name: retrieved_on
dtype: string
- name: score
dtype: string
- name: subreddit_id
dtype: string
- name: subreddit_name_prefixed
dtype: string
- name: subreddit_type
dtype: string
- name: total_awards_received
dtype: string
splits:
- name: programming
num_bytes: 3466623746
num_examples: 7503347
- name: tifu
num_bytes: 4761338653
num_examples: 12738669
- name: explainlikeimfive
num_bytes: 8451732573
num_examples: 16392814
- name: WritingPrompts
num_bytes: 4651591771
num_examples: 4436210
- name: changemyview
num_bytes: 8603031915
num_examples: 11600073
- name: LifeProTips
num_bytes: 5272994396
num_examples: 12829459
- name: todayilearned
num_bytes: 22655655241
num_examples: 60199778
- name: science
num_bytes: 7069809765
num_examples: 18112884
- name: askscience
num_bytes: 3144754665
num_examples: 6286702
- name: ifyoulikeblank
num_bytes: 547200329
num_examples: 1332211
- name: Foodforthought
num_bytes: 308377128
num_examples: 567900
- name: IWantToLearn
num_bytes: 408331672
num_examples: 745543
- name: bestof
num_bytes: 2003718831
num_examples: 4347522
- name: IAmA
num_bytes: 9380094090
num_examples: 25778822
- name: socialskills
num_bytes: 1000014402
num_examples: 1842733
- name: relationship_advice
num_bytes: 22298879735
num_examples: 38937398
- name: philosophy
num_bytes: 1494947876
num_examples: 2391695
- name: YouShouldKnow
num_bytes: 1165617658
num_examples: 2639265
- name: history
num_bytes: 1457852402
num_examples: 2962043
- name: books
num_bytes: 4562689426
num_examples: 10187495
- name: Showerthoughts
num_bytes: 13259109532
num_examples: 34123213
- name: personalfinance
num_bytes: 9484869588
num_examples: 18361314
- name: buildapc
num_bytes: 9801044390
num_examples: 21761801
- name: EatCheapAndHealthy
num_bytes: 853462012
num_examples: 1821897
- name: boardgames
num_bytes: 3131627378
num_examples: 6328926
- name: malefashionadvice
num_bytes: 2928017882
num_examples: 7712258
- name: femalefashionadvice
num_bytes: 1619784736
num_examples: 3262969
- name: scifi
num_bytes: 888152056
num_examples: 2193741
- name: Fantasy
num_bytes: 2285934538
num_examples: 4566639
- name: Games
num_bytes: 10396813188
num_examples: 23373965
- name: bodyweightfitness
num_bytes: 794549854
num_examples: 1613634
- name: SkincareAddiction
num_bytes: 3421122597
num_examples: 5660550
- name: podcasts
num_bytes: 464773126
num_examples: 943266
- name: suggestmeabook
num_bytes: 1842944304
num_examples: 3492937
- name: AskHistorians
num_bytes: 2244587909
num_examples: 2714353
- name: gaming
num_bytes: 28374513722
num_examples: 85729253
- name: DIY
num_bytes: 2113533684
num_examples: 4489265
- name: sports
num_bytes: 2230129132
num_examples: 6470079
- name: space
num_bytes: 3081499208
num_examples: 7896182
- name: gadgets
num_bytes: 1683252868
num_examples: 4104833
- name: Documentaries
num_bytes: 1852644771
num_examples: 4051474
- name: GetMotivated
num_bytes: 1211761267
num_examples: 3221980
- name: UpliftingNews
num_bytes: 2003149025
num_examples: 4741948
- name: technology
num_bytes: 10826871436
num_examples: 25404699
- name: Fitness
num_bytes: 6191132755
num_examples: 14319856
- name: travel
num_bytes: 1740556350
num_examples: 3806755
- name: lifehacks
num_bytes: 626791812
num_examples: 1799437
- name: Damnthatsinteresting
num_bytes: 6376694618
num_examples: 15643554
- name: gardening
num_bytes: 1825313940
num_examples: 4568468
- name: mildlyinteresting
num_bytes: 9079894206
num_examples: 26436769
download_size: 109177016105
dataset_size: 255339788158
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Reddit comments
size_categories:
- 10B<n<100B
source_datasets: []
tags:
- reddit
- social-media
task_categories:
- text-generation
task_ids:
- dialogue-modeling
- language-modeling
---
# Dataset Card for "REDDIT_comments"
## Dataset Description
- **Homepage:**
- **Paper: https://arxiv.org/abs/2001.08435**
### Dataset Summary
Comments from 50 high-quality subreddits, extracted from the Reddit PushShift data dumps (2006 through January 2023).
### Supported Tasks
These comments can be used for text generation and language modeling, as well as dialogue modeling.
## Dataset Structure
### Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
## Dataset Creation
### Curation Rationale
All information fields have been cast to string, since their format changes over time from one dump to the next. A reduced set of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received".
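The normalization described above — keeping a fixed key set and casting every value to string — can be sketched as follows. This is an illustration of the stated procedure, not the actual dump-processing script:

```python
# Keys retained in the released dataset, per the description above.
KEPT_KEYS = [
    "archived", "author", "author_fullname", "body", "comment_type",
    "controversiality", "created_utc", "edited", "gilded", "id",
    "link_id", "locked", "name", "parent_id", "permalink",
    "retrieved_on", "score", "subreddit", "subreddit_id",
    "subreddit_name_prefixed", "subreddit_type", "total_awards_received",
]

def normalize_comment(raw: dict) -> dict:
    """Drop keys outside KEPT_KEYS and cast every remaining value to str,
    so records from dumps with differing schemas become uniform."""
    return {k: str(raw[k]) for k in KEPT_KEYS if k in raw}

print(normalize_comment({"score": 3, "body": "hi", "junk": 1}))
```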
### Source Data
The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
#### Initial Data Collection and Normalization
See the paper.
#### Who are the source language producers?
Redditors are mostly young (65% below 30), male (70%), and American (50% of the site).
### Personal and Sensitive Information
The data contains Redditor's usernames associated to their content.
## Considerations for Using the Data
This dataset should be anonymized before any processing.
Though the subreddits selected are considered as being of higher quality, they can still reflect what you can find on the internet in terms of expressions of biases and toxicity.
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
NbAiLab/norwegian-alpaca | 2023-07-25T15:05:00.000Z | [
"task_categories:text-generation",
"language:no",
"language:nb",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] | NbAiLab | null | null | null | 7 | 67 | ---
license: cc-by-4.0
language:
- 'no'
- nb
tags:
- instruction-finetuning
pretty_name: NB Alpaca Norwegian Bokmål
task_categories:
- text-generation
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction_en
dtype: string
- name: input_en
dtype: string
- name: output_en
dtype: string
splits:
- name: train
num_bytes: 38067492
num_examples: 51942
download_size: 24204487
dataset_size: 38067492
---
# NB Alpaca Norwegian Bokmål
This dataset is a translation to Norwegian Bokmål of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json), a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca).
An [earlier version](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca/tree/main/nllb) used [Facebook's NLLB 1.3B model](https://huggingface.co/facebook/nllb-200-1.3B), but the current version uses OpenAI's `gpt-3.5-turbo`, hence this dataset cannot be used to create models that compete in any way against OpenAI. |
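Records can be rendered into training prompts with the standard Stanford Alpaca template; the template text below is an assumption carried over from the upstream Alpaca project, since this dataset only stores the raw fields:

```python
def format_prompt(example: dict) -> str:
    """Render one record with the (assumed) Stanford Alpaca prompt
    template, choosing the with-input or no-input variant."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        "### Response:\n"
    )

print(format_prompt({"instruction": "Oversett til engelsk.", "input": ""}))
```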
YuanPJ/summ_screen | 2023-03-29T04:51:45.000Z | [
"region:us"
] | YuanPJ | SummScreen Corpus contains over 26k pairs of TV series transcripts and human written recaps.
There are three features:
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: id of an example. | @inproceedings{chen-etal-2022-summscreen,
title = "{S}umm{S}creen: A Dataset for Abstractive Screenplay Summarization",
author = "Chen, Mingda and
Chu, Zewei and
Wiseman, Sam and
Gimpel, Kevin",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.589",
pages = "8602--8615",
abstract = "We introduce SummScreen, a summarization dataset comprised of pairs of TV series transcripts and human written recaps. The dataset provides a challenging testbed for abstractive summarization for several reasons. Plot details are often expressed indirectly in character dialogues and may be scattered across the entirety of the transcript. These details must be found and integrated to form the succinct plot descriptions in the recaps. Also, TV scripts contain content that does not directly pertain to the central plot but rather serves to develop characters or provide comic relief. This information is rarely contained in recaps. Since characters are fundamental to TV series, we also propose two entity-centric evaluation metrics. Empirically, we characterize the dataset by evaluating several methods, including neural models and those based on nearest neighbors. An oracle extractive approach outperforms all benchmarked models according to automatic metrics, showing that the neural models are unable to fully exploit the input transcripts. Human evaluation and qualitative analysis reveal that our non-oracle models are competitive with their oracle counterparts in terms of generating faithful plot events and can benefit from better content selectors. Both oracle and non-oracle models generate unfaithful facts, suggesting future research directions.",
} | null | 0 | 67 | Entry not found |
BelleGroup/train_1M_CN | 2023-04-03T08:23:17.000Z | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | BelleGroup | null | null | null | 104 | 67 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
Contains roughly 1 million Chinese instruction-following examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Example
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Fields:
```
instruction: the instruction
input: the input (empty for every record in this dataset)
output: the output
```
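Assuming the records are stored one JSON object per line (an assumption about the on-disk format), a small parser that also checks the schema described above — including the always-empty `input` field — might look like:

```python
import json

def parse_record(line: str) -> dict:
    """Parse one JSON record and sanity-check the documented schema."""
    rec = json.loads(line)
    if set(rec) != {"instruction", "input", "output"}:
        raise ValueError(f"unexpected keys: {sorted(rec)}")
    if rec["input"] != "":
        raise ValueError("`input` is expected to be empty in this dataset")
    return rec

sample = '{"instruction": "把下列数字加1:9", "input": "", "output": "10"}'
print(parse_record(sample)["output"])  # 10
```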
## Usage restrictions
This dataset, and any derivatives produced from it, may be used for research purposes only; commercial use, and any other use that could harm society, is prohibited.
This dataset does not represent the position, interests, or views of any party, nor any kind of claim by any group. The project accepts no liability for any damage or dispute arising from the use of this dataset.
|
metaeval/race-c | 2023-05-31T08:39:38.000Z | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"language:en",
"region:us"
] | metaeval | null | null | null | 0 | 67 | ---
task_categories:
- question-answering
- multiple-choice
language:
- en
---
RACE-C: additional data in the style of RACE (which covers middle- and high-school exams), extended to college-level exams.
https://github.com/mrcdata/race-c
```bib
@InProceedings{pmlr-v101-liang19a,
title={A New Multi-choice Reading Comprehension Dataset for Curriculum Learning},
author={Liang, Yichan and Li, Jianheng and Yin, Jian},
booktitle={Proceedings of The Eleventh Asian Conference on Machine Learning},
pages={742--757},
year={2019}
}
``` |
mstz/twonorm | 2023-04-07T14:58:58.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"twonorm",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 0 | 67 | ---
language:
- en
tags:
- twonorm
- tabular_classification
- binary_classification
pretty_name: Two Norm
size_categories:
- 1K<n<10K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- 8hr
- 1hr
---
# TwoNorm
The [TwoNorm dataset](https://www.openml.org/search?type=data&status=active&id=1507) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| twonorm | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/twonorm")["train"]
```
|
distil-whisper/peoples_speech-clean | 2023-09-25T10:30:13.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | distil-whisper | The People's Speech is a free-to-download 30,000-hour and growing supervised
conversational English speech recognition dataset licensed for academic and
commercial usage under CC-BY-SA (with a CC-BY subset). | @article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Ceron and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: A Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 67 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: People's Speech Clean
---
# Distil Whisper: People's Speech Clean
This is a variant of the [People's Speech Clean](https://huggingface.co/datasets/MLCommons/peoples_speech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/MLCommons/peoples_speech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|