id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
HuggingFaceH4/helpful-instructions | 2023-02-20T08:58:24.000Z | [
"license:apache-2.0",
"human-feedback",
"region:us"
] | HuggingFaceH4 | null | null | null | 5 | 6 | ---
license: apache-2.0
tags:
- human-feedback
pretty_name: Helpful Instructions
---
# Dataset Card for Helpful Instructions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
Helpful Instructions is a dataset of `(instruction, demonstration)` pairs that are derived from public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. You can load the dataset as follows:
```python
from datasets import load_dataset
# Load all subsets
helpful_instructions = load_dataset("HuggingFaceH4/helpful_instructions")
# Load a single subset
helpful_instructions_subset = load_dataset("HuggingFaceH4/helpful_instructions", data_dir="data/helpful-anthropic-raw")
```
### Supported Tasks and Leaderboards
This dataset can be used to fine-tune pretrained language models to follow instructions.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
jed351/rthk_news | 2023-02-16T17:24:50.000Z | [
"language:zh",
"region:us"
] | jed351 | null | null | null | 3 | 6 | ---
language:
- zh
---
### RTHK News Dataset
[RTHK](https://www.rthk.hk/) is a public broadcasting service under the Hong Kong Government, according to [Wikipedia](https://en.wikipedia.org/wiki/RTHK).
At the moment, this dataset is obtained by exporting messages from their [Telegram channel](https://t.me/rthk_new_c),
which contains news since April 2018.
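A minimal loading sketch with 🤗 Datasets (assuming the repository exposes a default configuration):
```python
from datasets import load_dataset

# Load the exported RTHK Telegram messages
rthk = load_dataset("jed351/rthk_news")
```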
I will update this dataset with more data in the future. |
Loie/VGGSound | 2023-03-26T13:25:40.000Z | [
"task_categories:audio-classification",
"size_categories:100B<n<1T",
"arxiv:2004.14368",
"region:us"
] | Loie | null | null | null | 5 | 6 | ---
task_categories:
- audio-classification
size_categories:
- 100B<n<1T
---
# VGGSound
VGG-Sound is an audio-visual correspondence dataset consisting of short clips of audio extracted from videos uploaded to YouTube.
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/vggsound/
- **Paper:** https://arxiv.org/abs/2004.14368
- **Github:** https://github.com/hche11/VGGSound
## Analysis
- **310+ classes:** VGG-Sound contains audio spanning a large number of challenging acoustic environments and noise characteristics of real applications.
- **200,000+ videos:** All videos are captured "in the wild" with audio-visual correspondence in the sense that the sound source is visually evident.
- **550+ hours:** VGG-Sound consists of both audio and video. Each segment is 10 seconds long.

## Download
We provide a csv file. For each YouTube video, we provide YouTube URLs, timestamps, audio labels and the train/test split. Each line in the csv file has the columns defined below.
```
# YouTube ID, start seconds, label, train/test split.
```
And you can download VGGSound directly from this [repository](https://huggingface.co/datasets/Loie/VGGSound/tree/main).
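A minimal sketch for parsing the csv with pandas; the file name `vggsound.csv` and the literal split values `train`/`test` are assumptions based on the column description above:
```python
import pandas as pd

# The csv has no header row; the column order follows the comment above.
df = pd.read_csv(
    "vggsound.csv",  # assumed file name
    header=None,
    names=["youtube_id", "start_seconds", "label", "split"],
)

# Reconstruct the official train/test partition
train = df[df["split"] == "train"]
test = df[df["split"] == "test"]
print(f"{len(train)} train clips, {len(test)} test clips")
```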
## License
The VGG-Sound dataset is available to download for commercial/research purposes under a Creative Commons Attribution 4.0 International License. The copyright remains with the original owners of the video. A complete version of the license can be found [here](https://thor.robots.ox.ac.uk/datasets/vggsound/license_vggsound.txt).
## Citation
Please cite the following if you make use of the dataset.
```
@InProceedings{Chen20,
author = "Honglie Chen and Weidi Xie and Andrea Vedaldi and Andrew Zisserman",
title = "VGGSound: A Large-scale Audio-Visual Dataset",
booktitle = "International Conference on Acoustics, Speech, and Signal Processing (ICASSP)",
year = "2020",
}
``` |
amcoff/skolmat | 2023-02-22T20:00:20.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:sv",
"license:mit",
"region:us"
] | amcoff | null | null | null | 0 | 6 | ---
annotations_creators:
- expert-generated
language:
- sv
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: Skolmat
size_categories: []
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for Skolmat
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
readerbench/ro-offense | 2023-08-08T10:48:15.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:readerbench/ro-offense",
"language:ro",
"license:apache-2.0",
"hate-speech-detect... | readerbench | null | null | null | 0 | 6 | ---
license: apache-2.0
annotations_creators:
- expert-generated
language_creators:
- found
task_categories:
- text-classification
language:
- ro
multilinguality:
- monolingual
source_datasets:
- readerbench/ro-offense
tags:
- hate-speech-detection
- offensive speech
- romanian
- nlp
task_ids:
- hate-speech-detection
pretty_name: RO-Offense-Sequences
size_categories:
- 1K<n<10K
extra_gated_prompt: 'Warning: this repository contains harmful content (abusive language,
hate speech).'
configs:
- config_name: default
data_files:
- split: train
path: "train.csv"
- split: test
path: "test.csv"
- config_name: ner
data_files:
- split: train
path: "train_ner.csv"
- split: test
path: "test_ner.csv"
---
# Dataset Card for "RO-Offense-Sequences"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
<!--
- **Paper:** News-RO-Offense - A Romanian Offensive Language Dataset and Baseline Models Centered on News Article Comments
-->
- **Homepage:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Repository:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
RO-Offense is a novel Romanian-language dataset for offensive language detection, with manually
annotated offense labels for comments from a local Romanian sports news website (gsp.ro),
resulting in 12,445 annotated messages.
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'id': 5,
'text':'PLACEHOLDER TEXT',
'label': 'OTHER'
}
```
### Data Fields
- `id`: The unique comment ID, corresponding to the ID in [RO Offense](https://huggingface.co/datasets/readerbench/ro-offense)
- `text`: full comment text
- `label`: the type of offensive message (OTHER, PROFANITY, INSULT, ABUSE)
### Data Splits
Split | Total | Other | Profanity | Insult | Abuse
:--- | ---: | ---: | ---: | ---: | ---:
Train | 9953 | 3656 | 1293 | 2236 | 2768
Test | 2492 | 916 | 324 | 559 | 693
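Given the two configurations declared in the metadata (`default` and `ner`), the dataset can presumably be loaded as follows; note that the repository is gated, so the terms must be accepted with a logged-in account first:
```python
from datasets import load_dataset

# Default configuration: train.csv / test.csv
ro_offense = load_dataset("readerbench/ro-offense")

# Alternative configuration: train_ner.csv / test_ner.csv
ro_offense_ner = load_dataset("readerbench/ro-offense", "ner")

print(ro_offense["train"][0])
```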
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification for the Romanian language.
For the labeling of texts, we loosely base our definitions on the GermEval 2019 task for detecting offensive language in German tweets (Struß et al., 2019).
Data source: comments on articles in Gazeta Sporturilor (gsp.ro) between 2011 and 2020.
Selection for annotation: we select comments from a pool of specific articles based on the number of comments in the article.
The number of comments per article has the following distribution:
```
mean 183.820923
std 334.707177
min 1.000000
25% 20.000000
50% 58.000000
75% 179.000000
max 2151.000000
```
Based on this, we select only comments from articles having between 20 and 50 comments. We also remove comments containing urls or three consecutive *, since these were mostly censored by editors or automatic profanity detection algorithms.
Additionally, in order to have meaningful messages for annotation, we select only messages with a length between 50 and 500 characters.
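A sketch of these selection criteria applied to a hypothetical comment table; the column names `article_comment_count` and `text` are illustrative, not the actual pipeline:
```python
import re
import pandas as pd

# Hypothetical comment table; the real pipeline operated on scraped gsp.ro comments.
comments = pd.DataFrame({
    "article_comment_count": [35, 12, 40],
    "text": [
        "Un comentariu suficient de lung pentru a trece de filtrul de lungime, fara linkuri.",
        "scurt",
        "Comentariu cenzurat *** cu link http://example.com inclus in text.",
    ],
})

url_or_censored = re.compile(r"https?://|\*{3}")

mask = (
    comments["article_comment_count"].between(20, 50)   # articles with 20-50 comments only
    & ~comments["text"].str.contains(url_or_censored)   # drop urls and censored (***) comments
    & comments["text"].str.len().between(50, 500)       # keep meaningful lengths
)
selected = comments[mask]
print(selected)
```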
### Source Data
Sports News Articles comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Sports News Article readers
### Annotations
- Andrei Paraschiv
- Irina Maria Sandu
#### Annotation process
##### OTHER
Label used for non offensive texts.
##### PROFANITY
This is the "lighter" form of abusive language. When profane words are used without a direct intend on offending a target, or without ascribing some negative qualities to a target we use this label. Some messages in this class may even have a positive sentiment and uses swearwords as emphasis. Messages containing profane words that are not directed towards a specific group or person, we label as **PROFANITY**
Also, self censored messages with swear words having some letters hidden, or some deceitful misspellings of swearwords that have clear intend on circumventing profanity detectors will be treated as **PROFANITY**.
##### INSULT
The message clearly wants to offend someone, ascribing negatively evaluated qualities or deficiencies, or labeling a person or a group of persons as unworthy or unvalued. Insults imply disrespect and contempt directed towards a target.
##### ABUSE
This label marks messages containing the stronger form of offensive and abusive language. This type of language ascribes to the target a social identity that is judged negatively by the majority of society, or is at least perceived as a mostly negatively judged identity. Shameful, unworthy or morally unacceptable identities fall into this category. In contrast to insults, instances of abusive language require that the target of judgment is seen as a representative of a group, and it is ascribed negative qualities that are taken to be universal, omnipresent and unchangeable characteristics of the group.
Additionally, dehumanizing language targeting a person or group is also classified as **ABUSE**.
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under the Apache-2.0 license.
### Citation Information
```
tbd
```
### Contributions |
Riksarkivet/mini_raw_diachronic_swe | 2023-03-13T11:39:53.000Z | [
"size_categories:1M<n<10M",
"language:sv",
"license:mit",
"historical",
"WIP",
"region:us"
] | Riksarkivet | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 796312222
num_examples: 4760470
download_size: 475243460
dataset_size: 796312222
license: mit
language:
- sv
tags:
- historical
- WIP
pretty_name: Kbuhist2
size_categories:
- 1M<n<10M
---
# Dataset Card for mini_raw_diachronic_swe
The Swedish Diachronic Corpus is a project funded by [Swe-Clarin](https://sweclarin.se/eng); it provides a corpus of texts covering the time period from Old Swedish onwards.
### Data Splits
**This will be further extended!**
* Number of instances in the `train` split: 4,760,470
## Acknowledgements
We gratefully acknowledge [SWE-clarin](https://sweclarin.se/) for the datasets.
## Citation Information
Eva Pettersson and Lars Borin (2022)
Swedish Diachronic Corpus
In Darja Fišer & Andreas Witt (eds.), CLARIN. The Infrastructure for Language Resources. Berlin: deGruyter. https://degruyter.com/document/doi/10.1515/9783110767377-022/html |
vietgpt/wikivoyage_en | 2023-03-30T18:39:38.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"LM",
"region:us"
] | vietgpt | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 240563228
num_examples: 24838
download_size: 148244766
dataset_size: 240563228
task_categories:
- text-generation
language:
- en
tags:
- LM
size_categories:
- 10K<n<100K
---
# wikivoyage_filtered
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wikivoyage
- Num examples: 24,838
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikivoyage_en")
``` |
KonradSzafer/stackoverflow_linux | 2023-03-04T23:23:28.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"region:us"
] | KonradSzafer | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: title
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 303464
num_examples: 270
- name: test
num_bytes: 37456
num_examples: 30
download_size: 172425
dataset_size: 340920
task_categories:
- question-answering
language:
- en
pretty_name: Stack Overflow Linux
size_categories:
- n<1K
---
# Dataset Card for "stackoverflow_linux"
Dataset information:
- Source: Stack Overflow
- Category: Linux
- Number of samples: 300
- Train/Test split: 270/30
- Quality: Data comes from the top 1k most upvoted questions (a loading sketch follows below)
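A minimal loading sketch with 🤗 Datasets; the `title` and `url` fields follow the metadata block above:
```python
from datasets import load_dataset

dataset = load_dataset("KonradSzafer/stackoverflow_linux")
print(dataset["train"][0]["title"])
print(dataset["train"][0]["url"])
```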
## Additional Information
### License
All Stack Overflow user contributions are licensed under CC-BY-SA 3.0 with attribution required.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
filevich/uy22 | 2023-02-28T01:50:29.000Z | [
"language:es",
"license:mit",
"region:us"
] | filevich | null | null | null | 1 | 6 | ---
license: mit
language:
- es
pretty_name: uy22
--- |
HuggingFaceH4/instruct_me | 2023-03-06T08:36:03.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"human-feedback",
"instruct",
"reward-modeling",
"region:us"
] | HuggingFaceH4 | Instruct Me is a dataset of instruction-like dialogues between a human user and AI assistant. The prompts are derived from (prompt, completion) pairs in the Helpful Instructions dataset. The goal is to train a language model that is "chatty" and can answer the kind of questions or tasks a human user might instruct an AI assistant to perform. | null | null | 14 | 6 | ---
license: apache-2.0
dataset_info:
- config_name: instruction_tuning
features:
- name: text
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 29975565
num_examples: 41685
- name: test
num_bytes: 3298059
num_examples: 4632
download_size: 18425612
dataset_size: 33273624
- config_name: reward_modelling
features:
- name: text
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 25274204
num_examples: 41685
- name: test
num_bytes: 2777314
num_examples: 4632
download_size: 15636566
dataset_size: 28051518
- config_name: ppo
features:
- name: prompt
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 50787070
num_examples: 83371
- name: test
num_bytes: 5715727
num_examples: 9264
download_size: 31461165
dataset_size: 56502797
- config_name: reward_modeling
features:
- name: prompt
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 25274204
num_examples: 41685
- name: test
num_bytes: 2777314
num_examples: 4632
download_size: 15636838
dataset_size: 28051518
task_categories:
- conversational
- text-generation
language:
- en
tags:
- human-feedback
- instruct
- reward-modeling
pretty_name: Instruct Me
---
# Dataset card for Instruct Me
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset summary
Instruct Me is a dataset of prompts and instruction dialogues between a human user and AI assistant. The prompts are derived from (prompt, completion) pairs in the [Helpful Instructions dataset](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions). The goal is to train a language model that is "chatty" and can answer the kind of questions or tasks a human user might instruct an AI assistant to perform.
### Supported Tasks and Leaderboard
We provide 3 configs that can be used for training RLHF models:
#### instruction_tuning
Single-turn user/bot dialogues for instruction tuning.
#### reward_modeling
Prompts to generate model completions and collect human preference data
#### ppo
Prompts to generate model completions for optimization of the instruction-tuned model with techniques like PPO.
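Each config can presumably be loaded by name; the column names in the comments follow the metadata block above:
```python
from datasets import load_dataset

# Single-turn dialogues for instruction tuning (`text` column)
instruct = load_dataset("HuggingFaceH4/instruct_me", "instruction_tuning")

# Prompt-only configs (`prompt` column)
reward = load_dataset("HuggingFaceH4/instruct_me", "reward_modeling")
ppo = load_dataset("HuggingFaceH4/instruct_me", "ppo")

print(instruct["train"][0]["text"])
```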
### Changelog
* March 6, 2023: `v1.1.0` release. Changed the `text` columns for the `reward_modeling` and `ppo` configs to `prompt` for consistency with our dataset schemas elsewhere.
* March 5, 2023: `v1.0.0` release. |
kanishka/comps | 2023-09-16T15:09:24.000Z | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2210.01963",
"region:us"
] | kanishka | COMPS is a dataset of minimal pair sentences in English that enables testing
knowledge of concepts and their properties in language models (LMs).
Specifically, it tests the ability of LMs to attribute properties to everyday
concepts, and to demonstrate reasoning compatible with property inheritance, where
subordinate concepts inherit the properties of their superordinates (hypernyms). | @article{misra2022comps,
title={COMPS: Conceptual Minimal Pair Sentences for testing Property Knowledge and Inheritance in Pre-trained Language Models},
author={Misra, Kanishka and Rayz, Julia Taylor and Ettinger, Allyson},
journal={arXiv preprint arXiv:2210.01963},
year={2022}
} | null | 1 | 6 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license: apache-2.0
multilinguality:
- monolingual
pretty_name: COMPS
size_categories:
- 10K<n<100K
source_datasets:
- original
---
# Dataset Card for "COMPS"
## Dataset Description
COMPS is a dataset of minimal pair sentences in English that enables testing
knowledge of concepts and their properties in language models (LMs).
Specifically, it tests the ability of LMs to attribute properties to everyday
concepts, and to demonstrate reasoning compatible with property inheritance, where
subordinate concepts inherit the properties of their superordinates (hypernyms).
- **Homepage:** [https://github.com/kanishkamisra/comps/](https://github.com/kanishkamisra/comps/)
- **Repository:** [https://github.com/kanishkamisra/comps/](https://github.com/kanishkamisra/comps/)
- **Paper:** [arxiv](https://arxiv.org/abs/2210.01963)
- **Point of Contact:** [Kanishka Misra](https://kanishka.website)
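The dataset can presumably be loaded directly from the Hub (assuming a single default configuration):
```python
from datasets import load_dataset

comps = load_dataset("kanishka/comps")
```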
### Citation Information
```
@inproceedings{misra-etal-2023-comps,
title = "{COMPS}: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models",
author = "Misra, Kanishka and
Rayz, Julia and
Ettinger, Allyson",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.eacl-main.213",
doi = "10.18653/v1/2023.eacl-main.213",
pages = "2928--2949",
abstract = "A characteristic feature of human semantic cognition is its ability to not only store and retrieve the properties of concepts observed through experience, but to also facilitate the inheritance of properties (can breathe) from superordinate concepts (animal) to their subordinates (dog){---}i.e. demonstrate property inheritance. In this paper, we present COMPS, a collection of minimal pair sentences that jointly tests pre-trained language models (PLMs) on their ability to attribute properties to concepts and their ability to demonstrate property inheritance behavior. Analyses of 22 different PLMs on COMPS reveal that they can easily distinguish between concepts on the basis of a property when they are trivially different, but find it relatively difficult when concepts are related on the basis of nuanced knowledge representations. Furthermore, we find that PLMs can show behaviors suggesting successful property inheritance in simple contexts, but fail in the presence of distracting information, which decreases the performance of many models sometimes even below chance. This lack of robustness in demonstrating simple reasoning raises important questions about PLMs{'} capacity to make correct inferences even when they appear to possess the prerequisite knowledge.",
}
```
|
daviddaubner/misinformation-detection | 2023-03-09T17:06:23.000Z | [
"license:unknown",
"region:us"
] | daviddaubner | null | null | null | 0 | 6 | ---
license: unknown
---
|
pnadel/latin_sentences | 2023-03-07T16:08:13.000Z | [
"region:us"
] | pnadel | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: f_name
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 39199112.23995617
num_examples: 170421
- name: test
num_bytes: 13066600.760043832
num_examples: 56808
download_size: 25166966
dataset_size: 52265713.0
---
# Dataset Card for "latin_sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gjuggler/bird-data | 2023-03-11T14:49:34.000Z | [
"task_categories:image-classification",
"language:en",
"license:creativeml-openrail-m",
"biology",
"region:us"
] | gjuggler | We worked with citizen scientists and domainexperts to collect NABirds, a new high
quality dataset containing 48,562 images of North American birds with 555
categories, part annotations and bounding boxes. | @MISC{Van_Horn_undated-kj,
title = "Building a bird recognition app and large scale dataset with citizen
scientists: The fine print in fine-grained dataset collection",
author = "Van Horn, Grant and Branson, Steve and Farrell, Ryan and Haber,
Scott and Barry, Jessie and Ipeirotis, Panos and Perona, Pietro and
Belongie, Serge and Lab Of Ornithology, Cornell and Tech, Cornell"
} | null | 1 | 6 | ---
license: creativeml-openrail-m
dataset_info:
features:
- name: image_file_path
dtype: string
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
'0': Little Blue Heron
'1': Swainson's Hawk
'2': Glaucous-winged Gull
'3': Spotted Towhee
'4': Neotropic Cormorant
'5': White-eyed Vireo
'6': Tundra Swan
'7': Costa's Hummingbird
'8': American Crow
'9': American Tree Sparrow
'10': Savannah Sparrow
'11': Verdin
'12': Wild Turkey
'13': Rufous Hummingbird
'14': Blue-gray Gnatcatcher
'15': Song Sparrow
'16': Tricolored Heron
'17': Phainopepla
'18': Harlequin Duck
'19': Florida Scrub-Jay
'20': Black-billed Cuckoo
'21': Laughing Gull
'22': Lesser Goldfinch
'23': Common Tern
'24': Tree Swallow
'25': Black-billed Magpie
'26': Surf Scoter
'27': Black-and-white Warbler
'28': Mountain Chickadee
'29': California Thrasher
'30': Osprey
'31': Long-tailed Duck
'32': Semipalmated Plover
'33': Reddish Egret
'34': Black Guillemot
'35': Ring-billed Gull
'36': American Avocet
'37': White-faced Ibis
'38': Western Tanager
'39': Black-bellied Plover
'40': Winter Wren
'41': Mississippi Kite
'42': Townsend's Solitaire
'43': Bonaparte's Gull
'44': Cassin's Finch
'45': Yellow-rumped Warbler
'46': Great Black-backed Gull
'47': Red-naped Sapsucker
'48': Swamp Sparrow
'49': Western Screech-Owl
'50': Rusty Blackbird
'51': Northern Saw-whet Owl
'52': Plumbeous Vireo
'53': Bushtit
'54': White-tailed Kite
'55': White Ibis
'56': Ovenbird
'57': Cactus Wren
'58': Fish Crow
'59': Greater Scaup
'60': Pacific Loon
'61': Red-breasted Sapsucker
'62': Pied-billed Grebe
'63': Eastern Towhee
'64': Acorn Woodpecker
'65': Mourning Dove
'66': Red-bellied Woodpecker
'67': Eastern Wood-Pewee
'68': Northern Mockingbird
'69': Red Crossbill
'70': Wood Stork
'71': Pine Siskin
'72': Pacific Wren
'73': Barrow's Goldeneye
'74': American White Pelican
'75': Cordilleran Flycatcher
'76': Eastern Meadowlark
'77': Yellow-headed Blackbird
'78': Chipping Sparrow
'79': Common Grackle
'80': American Dipper
'81': Double-crested Cormorant
'82': Black Phoebe
'83': Surfbird
'84': Loggerhead Shrike
'85': Gila Woodpecker
'86': Snow Bunting
'87': Field Sparrow
'88': Brown Pelican
'89': Merlin
'90': Golden Eagle
'91': Turkey Vulture
'92': American Wigeon
'93': Black Turnstone
'94': Swainson's Thrush
'95': White-winged Crossbill
'96': Oak Titmouse
'97': Least Flycatcher
'98': Brown-headed Cowbird
'99': Horned Grebe
'100': Canvasback
'101': Yellow-breasted Chat
'102': Pine Warbler
'103': Bald Eagle
'104': Downy Woodpecker
'105': Black-chinned Hummingbird
'106': Prothonotary Warbler
'107': Allen's Hummingbird
'108': Louisiana Waterthrush
'109': Gray Catbird
'110': Western Meadowlark
'111': House Finch
'112': Brown Thrasher
'113': Common Goldeneye
'114': Hoary Redpoll
'115': Eastern Kingbird
'116': Evening Grosbeak
'117': Mexican Jay
'118': Mute Swan
'119': Indigo Bunting
'120': Brewer's Sparrow
'121': American Goldfinch
'122': Red-headed Woodpecker
'123': Bell's Vireo
'124': White-winged Scoter
'125': Sandhill Crane
'126': Boat-tailed Grackle
'127': Scissor-tailed Flycatcher
'128': Great-tailed Grackle
'129': Common Merganser
'130': Marsh Wren
'131': Western Wood-Pewee
'132': Barred Owl
'133': Canada Warbler
'134': Common Nighthawk
'135': Long-billed Curlew
'136': Scaled Quail
'137': Western Sandpiper
'138': Ruby-crowned Kinglet
'139': Yellow-bellied Sapsucker
'140': Killdeer
'141': Chestnut-backed Chickadee
'142': Belted Kingfisher
'143': Blackpoll Warbler
'144': Purple Gallinule
'145': American Robin
'146': Solitary Sandpiper
'147': Chihuahuan Raven
'148': Yellow-billed Magpie
'149': Black Tern
'150': House Sparrow
'151': Rufous-crowned Sparrow
'152': Ring-necked Duck
'153': Warbling Vireo
'154': Red-shouldered Hawk
'155': Northern Harrier
'156': Bay-breasted Warbler
'157': Great Cormorant
'158': Rock Pigeon
'159': Short-billed Dowitcher
'160': Bronzed Cowbird
'161': Hooded Warbler
'162': Black Vulture
'163': White-breasted Nuthatch
'164': Lincoln's Sparrow
'165': Whimbrel
'166': Varied Thrush
'167': Dickcissel
'168': Snowy Owl
'169': Bank Swallow
'170': Veery
'171': Northern Waterthrush
'172': Bridled Titmouse
'173': Semipalmated Sandpiper
'174': Harris's Hawk
'175': Northern Rough-winged Swallow
'176': Northern Pintail
'177': Pelagic Cormorant
'178': Clark's Grebe
'179': Broad-winged Hawk
'180': Swallow-tailed Kite
'181': Monk Parakeet
'182': Blackburnian Warbler
'183': Burrowing Owl
'184': Cooper's Hawk
'185': Black Skimmer
'186': Forster's Tern
'187': Black-crested Titmouse
'188': Northwestern Crow
'189': Wood Thrush
'190': Blue Jay
'191': Dunlin
'192': Yellow-billed Cuckoo
'193': Black-throated Blue Warbler
'194': Carolina Chickadee
'195': Gadwall
'196': Nuttall's Woodpecker
'197': Common Gallinule
'198': Wilson's Snipe
'199': Greater White-fronted Goose
'200': Glossy Ibis
'201': Brant
'202': Common Ground-Dove
'203': Band-tailed Pigeon
'204': Marbled Godwit
'205': American Redstart
'206': Clay-colored Sparrow
'207': American Coot
'208': American Pipit
'209': Cackling Goose
'210': Northern Shrike
'211': Ruddy Duck
'212': Red-necked Grebe
'213': Ross's Goose
'214': Townsend's Warbler
'215': American Kestrel
'216': Royal Tern
'217': Sharp-shinned Hawk
'218': Black-legged Kittiwake
'219': Pileated Woodpecker
'220': Hermit Thrush
'221': Northern Gannet
'222': Western Kingbird
'223': Green-tailed Towhee
'224': Pine Grosbeak
'225': Harris's Sparrow
'226': Bullock's Oriole
'227': Brown-headed Nuthatch
'228': Cinnamon Teal
'229': Eastern Phoebe
'230': Gambel's Quail
'231': Nashville Warbler
'232': Baltimore Oriole
'233': Eastern Screech-Owl
'234': American Oystercatcher
'235': Ash-throated Flycatcher
'236': Inca Dove
'237': Anna's Hummingbird
'238': Black-headed Grosbeak
'239': Canada Goose
'240': Ruby-throated Hummingbird
'241': California Quail
'242': American Woodcock
'243': Spotted Sandpiper
'244': Blue-headed Vireo
'245': Wood Duck
'246': Summer Tanager
'247': Black-capped Chickadee
'248': Black-tailed Gnatcatcher
'249': Juniper Titmouse
'250': Red-throated Loon
'251': White-throated Sparrow
'252': Pacific-slope Flycatcher
'253': Brown-capped Rosy-Finch
'254': Canyon Wren
'255': Say's Phoebe
'256': Blue-winged Warbler
'257': Abert's Towhee
'258': Greater Yellowlegs
'259': Lazuli Bunting
'260': Red-breasted Nuthatch
'261': Carolina Wren
'262': Red-eyed Vireo
'263': Yellow-throated Vireo
'264': Least Sandpiper
'265': Roseate Spoonbill
'266': Mallard
'267': Vesper Sparrow
'268': Common Redpoll
'269': Heermann's Gull
'270': Broad-tailed Hummingbird
'271': Snowy Egret
'272': Barn Swallow
'273': Vermilion Flycatcher
'274': Rose-breasted Grosbeak
'275': Dark-eyed Junco
'276': Crested Caracara
'277': Gray Jay
'278': Purple Martin
'279': Magnolia Warbler
'280': Orange-crowned Warbler
'281': Broad-billed Hummingbird
'282': Painted Bunting
'283': American Black Duck
'284': Vaux's Swift
'285': Northern Bobwhite
'286': Black-throated Gray Warbler
'287': Red-winged Blackbird
'288': Black-crowned Night-Heron
'289': California Gull
'290': Common Raven
'291': Brewer's Blackbird
'292': Purple Finch
'293': Northern Cardinal
'294': Western Scrub-Jay
'295': Western Bluebird
'296': Northern Parula
'297': Northern Pygmy-Owl
'298': Palm Warbler
'299': Violet-green Swallow
'300': Great Crested Flycatcher
'301': Rough-legged Hawk
'302': Tufted Titmouse
'303': MacGillivray's Warbler
'304': Lark Bunting
'305': Orchard Oriole
'306': Bufflehead
'307': Black Oystercatcher
'308': Great Egret
'309': Redhead
'310': Blue-winged Teal
'311': Curve-billed Thrasher
'312': Scarlet Tanager
'313': Horned Lark
'314': Brandt's Cormorant
'315': White-crowned Sparrow
'316': House Wren
'317': Chimney Swift
'318': Black-necked Stilt
'319': Yellow Warbler
'320': Pygmy Nuthatch
'321': Gray-crowned Rosy-Finch
'322': Hutton's Vireo
'323': Hooded Merganser
'324': Western Grebe
'325': Canyon Towhee
'326': Ladder-backed Woodpecker
'327': Bobolink
'328': Golden-fronted Woodpecker
'329': Prairie Falcon
'330': Black-throated Green Warbler
'331': Greater Roadrunner
'332': Cedar Waxwing
'333': Blue Grosbeak
'334': Mew Gull
'335': White-throated Swift
'336': Red-breasted Merganser
'337': Cassin's Kingbird
'338': Green Heron
'339': Eastern Bluebird
'340': Eared Grebe
'341': Fox Sparrow
'342': Pigeon Guillemot
'343': Black-bellied Whistling-Duck
'344': Willet
'345': Mountain Bluebird
'346': Clark's Nutcracker
'347': Northern Flicker
'348': Bewick's Wren
'349': Prairie Warbler
'350': Anhinga
'351': Ruffed Grouse
'352': Northern Shoveler
'353': Common Loon
'354': Bohemian Waxwing
'355': Peregrine Falcon
'356': Snow Goose
'357': Lesser Scaup
'358': Golden-crowned Kinglet
'359': Great Blue Heron
'360': Ruddy Turnstone
'361': Western Gull
'362': Hairy Woodpecker
'363': Black Scoter
'364': Common Yellowthroat
'365': Boreal Chickadee
'366': Cave Swallow
'367': Mottled Duck
'368': Yellow-crowned Night-Heron
'369': Wilson's Phalarope
'370': Pyrrhuloxia
'371': Sanderling
'372': Tennessee Warbler
'373': Cliff Swallow
'374': Lark Sparrow
'375': Ring-necked Pheasant
'376': Great Horned Owl
'377': Hermit Warbler
'378': Yellow-throated Warbler
'379': Eurasian Collared-Dove
'380': Mourning Warbler
'381': Cassin's Vireo
'382': Cattle Egret
'383': Cape May Warbler
'384': European Starling
'385': Black Rosy-Finch
'386': White-winged Dove
'387': Common Eider
'388': Calliope Hummingbird
'389': Lesser Yellowlegs
'390': Golden-crowned Sparrow
'391': Brown Creeper
'392': Green-winged Teal
'393': Red-tailed Hawk
'394': Hooded Oriole
'395': Caspian Tern
'396': Trumpeter Swan
'397': California Towhee
'398': Wrentit
'399': Chestnut-sided Warbler
'400': Wilson's Warbler
'401': Barn Owl
'402': Herring Gull
'403': Steller's Jay
splits:
- name: train
num_bytes: 9106091
num_examples: 23912
- name: test
num_bytes: 9374111
num_examples: 24615
download_size: 9877722099
dataset_size: 18480202
task_categories:
- image-classification
language:
- en
tags:
- biology
--- |
webnlg/challenge-2023 | 2023-03-10T11:22:40.000Z | [
"task_categories:tabular-to-text",
"task_ids:rdf-to-text",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-db_pedia",
"source_datasets:original",
"language:br",
"language:cy",
"language:... | webnlg | The WebNLG challenge consists in mapping data to text. The training data consists
of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation
of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).
a. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot)
b. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot
As the example illustrates, the task involves specific NLG subtasks such as sentence segmentation
(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),
aggregation (how to avoid repetitions) and surface realisation
(how to build a syntactically correct and natural sounding text). | @inproceedings{web_nlg,
author = {Claire Gardent and
Anastasia Shimorina and
Shashi Narayan and
Laura Perez{-}Beltrachini},
editor = {Regina Barzilay and
Min{-}Yen Kan},
title = {Creating Training Corpora for {NLG} Micro-Planners},
booktitle = {Proceedings of the 55th Annual Meeting of the
Association for Computational Linguistics,
{ACL} 2017, Vancouver, Canada, July 30 - August 4,
Volume 1: Long Papers},
pages = {179--188},
publisher = {Association for Computational Linguistics},
year = {2017},
url = {https://doi.org/10.18653/v1/P17-1017},
doi = {10.18653/v1/P17-1017}
} | null | 2 | 6 | ---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- br
- cy
- ga
- mt
- ru
license:
- cc-by-sa-3.0
- cc-by-nc-sa-4.0
- gfdl
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-db_pedia
- original
task_categories:
- tabular-to-text
task_ids:
- rdf-to-text
paperswithcode_id: null
pretty_name: WebNLG 2023 challenge
dataset_info:
- config_name: br
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 14841422
num_examples: 13211
- name: validation
num_bytes: 1394620
num_examples: 1399
download_size: 10954332
dataset_size: 16236042
- config_name: cy
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 15070109
num_examples: 13211
- name: validation
num_bytes: 1605315
num_examples: 1665
download_size: 10954332
dataset_size: 16675424
- config_name: ga
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 15219249
num_examples: 13211
- name: validation
num_bytes: 1621527
num_examples: 1665
download_size: 10954332
dataset_size: 16840776
- config_name: mt
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 15281045
num_examples: 13211
- name: validation
num_bytes: 1611988
num_examples: 1665
download_size: 10954332
dataset_size: 16893033
- config_name: ru
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 8145815
num_examples: 5573
- name: validation
num_bytes: 1122090
num_examples: 790
download_size: 10954332
dataset_size: 9267905
---
# Dataset Card for WebNLG
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WebNLG 2023 challenge](https://synalp.gitlabpages.inria.fr/webnlg-challenge/challenge_2023/)
- **Repository:** [GitHub repository](https://github.com/WebNLG/2023-Challenge)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [webnlg-challenge@inria.fr](mailto:webnlg-challenge@inria.fr)
### Dataset Summary
The WebNLG 2023 challenge focuses on four under-resourced languages which are severely under-represented in research on
text generation, namely Maltese, Irish, Breton and Welsh. In addition, WebNLG 2023 once again includes Russian, which
was first featured in WebNLG 2020.
The challenge focuses on RDF-to-text generation, similarly to WebNLG 2017 but targeting Breton, Irish, Maltese, Welsh,
and Russian.
The challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a set of
triples extracted from DBpedia and the text is a verbalisation of these triples.
For instance, given the 4 RDF triples:
```
<entry category="Company" eid="Id21" shape="(X (X) (X) (X) (X))" shape_type="sibling" size="4">
<modifiedtripleset>
<mtriple>Trane | foundingDate | 1913-01-01</mtriple>
<mtriple>Trane | location | Ireland</mtriple>
<mtriple>Trane | foundationPlace | La_Crosse,_Wisconsin</mtriple>
<mtriple>Trane | numberOfEmployees | 29000</mtriple>
</modifiedtripleset>
</entry>
```
the aim is to generate a text such as (English text):
```
Trane, which was founded on January 1st 1913 in La Crosse, Wisconsin, is based in Ireland. It has 29,000 employees.
```
or (Russian text):
```
Компания "Тране", основанная 1 января 1913 года в Ла-Кроссе в штате Висконсин, находится в Ирландии. В компании работают 29 тысяч человек.
```
As the example illustrates, the task involves specific NLG subtasks such as sentence segmentation
(how to chunk the input data into sentences), lexicalisation (of the DBpedia properties),
aggregation (how to avoid repetitions) and surface realisation
(how to build a syntactically correct and natural sounding text).
### Supported Tasks and Leaderboards
The dataset supports a Structured-to-Text task which requires a model to take a set of RDF (Resource Description Framework)
triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural language
sentence expressing the information contained in the triples.
The dataset is used in the [WebNLG 2023](https://synalp.gitlabpages.inria.fr/webnlg-challenge/challenge_2023/)
challenge.
Results are evaluated with automatic metrics: [BLEU](https://huggingface.co/metrics/bleu),
[METEOR](https://huggingface.co/metrics/meteor), [ChrF++](https://huggingface.co/metrics/chrf),
[TER](https://huggingface.co/metrics/ter) and [BERTscore](https://huggingface.co/metrics/bertscore).
Additionally, results are assessed according to criteria such as grammaticality/correctness, appropriateness/adequacy,
fluency/naturalness, etc., by native speakers.
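As a sketch of the automatic evaluation, the linked metrics can be computed with the 🤗 `evaluate` package (an assumption; the challenge's official scoring scripts may differ). The hypothesis/reference strings here are illustrative:
```python
import evaluate

predictions = ["Trane, founded on January 1st 1913 in La Crosse, Wisconsin, is based in Ireland."]
references = [[
    "Trane, which was founded on January 1st 1913 in La Crosse, Wisconsin, is based in Ireland. It has 29,000 employees."
]]

bleu = evaluate.load("bleu")
chrf = evaluate.load("chrf")

print(bleu.compute(predictions=predictions, references=references))
print(chrf.compute(predictions=predictions, references=references))
```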
### Languages
The dataset comprises Breton (`br`), Welsh (`cy`), Irish (`ga`), Maltese (`mt`) and Russian (`ru`) languages.
## Dataset Structure
### Data Instances
A typical example contains the original RDF triples in the set, a modified version which presented to crowd workers,
and a set of possible verbalizations for this set of triples:
```
{'category': 'Airport',
'size': 1,
'eid': '1',
'original_triple_sets': {'otriple_set': [['Aarhus_Airport | cityServed | "Aarhus, Denmark"@en']]},
'modified_triple_sets': {'mtriple_set': [['Aarhus_Airport | cityServed | "Aarhus, Denmark"']]},
'shape': '(X (X))',
'shape_type': 'NA',
'lex': {'comment': ['good', 'good', '', ''],
'lid': ['Id1', 'Id2', 'Id3', 'Id3'],
'text': ['Aarhus a zo an aro-vezh Aarhus.',
"Aarhus a servijit ar c'hêr Aarhus.",
'The Aarhus is the airport of Aarhus, Denmark.',
'Aarhus Airport serves the city of Aarhus, Denmark.'],
'lang': ['br', 'br', 'en', 'en']}}
```
### Data Fields
The following fields can be found in the instances:
- `category`: the category of the DBpedia entities present in the RDF triples.
- `eid`: an example ID, only unique per split per category.
- `size`: number of RDF triples in the set.
- `shape`: (since v2) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. `shape` is a string representation of the tree with nested parentheses where X is a node (see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format))
- `shape_type`: (since v2) is a type of the tree shape, which can be: `chain` (the object of one triple is the subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
- `test_category`: (for `webnlg_challenge_2017` and `v3`) tells whether the set of RDF triples was present in the training set or not. Several splits of the test set are available: with and without references, and for RDF-to-text generation / for semantic parsing.
- `lex`: the lexicalizations, with:
- `text`: the text to be predicted.
- `lid`: a lexicalization ID, unique per example.
- `comment`: the lexicalizations were rated by crowd workers are either `good` or `bad`
- `lang`: (for `release_v3.0_ru`) the language used because original English texts were kept in the Russian version.
### Data Splits
The dataset is split into train and validation:
| language | train | validation |
|----------|------:|-----------:|
| br | 13211 | 1399 |
| cy | 13211 | 1665 |
| ga | 13211 | 1665 |
| mt | 13211 | 1665 |
| ru | 5573 | 790 |
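Each language is a separate configuration, so a single language can presumably be loaded by its code; the field access below follows the instance example above:
```python
from datasets import load_dataset

# Load the Russian portion of the challenge data
webnlg_ru = load_dataset("webnlg/challenge-2023", "ru")

example = webnlg_ru["train"][0]
print(example["modified_triple_sets"]["mtriple_set"][0])
print(example["lex"]["text"])
```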
## Dataset Creation
### Curation Rationale
The WebNLG dataset was created to promote the development _(i)_ of RDF verbalisers and _(ii)_ of microplanners able to handle a wide range of linguistic constructions. The dataset aims at covering knowledge in different domains ("categories"). The same properties and entities can appear in several categories.
### Source Data
The data was compiled from raw DBpedia triples. [This paper](https://www.aclweb.org/anthology/C16-1141/) explains how the triples were selected.
#### Initial Data Collection and Normalization
Initial triples extracted from DBpedia were modified in several ways. See [official documentation](https://webnlg-challenge.loria.fr/docs/) for the most frequent changes that have been made. An original tripleset and a modified tripleset usually represent a one-to-one mapping. However, there are cases with many-to-one mappings when several original triplesets are mapped to one modified tripleset.
Entities that served as roots of RDF trees are listed in [this file](https://gitlab.com/shimorina/webnlg-dataset/-/blob/master/supplementary/entities_dict.json).
The English WebNLG 2020 dataset (v3.0) for training comprises data-text pairs for 16 distinct DBpedia categories:
- The 10 seen categories used in the 2017 version: Airport, Astronaut, Building, City, ComicsCharacter, Food, Monument, SportsTeam, University, and WrittenWork.
- The 5 unseen categories of 2017, which are now part of the seen data: Athlete, Artist, CelestialBody, MeanOfTransportation, Politician.
- 1 new category: Company.
The Russian dataset (v3.0) comprises data-text pairs for 9 distinct categories: Airport, Astronaut, Building, CelestialBody, ComicsCharacter, Food, Monument, SportsTeam, and University.
#### Who are the source language producers?
There are no source texts, all textual material was compiled during the annotation process.
### Annotations
#### Annotation process
Annotators were first asked to create sentences that verbalise single triples. In a second round, annotators were asked to combine single-triple sentences together into sentences that cover 2 triples. And so on until 7 triples. Quality checks were performed to ensure the quality of the annotations. See Section 3.3 in [the dataset paper](https://www.aclweb.org/anthology/P17-1017.pdf).
Russian data was translated from English with an MT system and then was post-edited by crowdworkers. See Section 2.2 of [this paper](https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.7.pdf).
#### Who are the annotators?
All references were collected through crowdsourcing platforms (CrowdFlower/Figure 8 and Amazon Mechanical Turk). For Russian, post-editing was done using the Yandex.Toloka crowdsourcing platform.
### Personal and Sensitive Information
Neither the dataset as published or the annotation process involves the collection or sharing of any kind of personal / demographic information.
## Considerations for Using the Data
### Social Impact of Dataset
We do not foresee any negative social impact in particular from this dataset or task.
Positive outlooks: Being able to generate good quality text from RDF data would permit, e.g., making this data more accessible to lay users, enriching existing text with information drawn from knowledge bases such as DBpedia or describing, comparing and relating entities present in these knowledge bases.
### Discussion of Biases
This dataset is created using DBpedia RDF triples which naturally exhibit biases that have been found to exist in Wikipedia such as some forms of, e.g., gender bias.
The choice of [entities](https://gitlab.com/shimorina/webnlg-dataset/-/blob/master/supplementary/entities_dict.json), described by RDF trees, was not controlled. As such, they may contain gender biases; for instance, all the astronauts described by RDF triples are male. Hence, in texts, pronouns _he/him/his_ occur more often. Similarly, entities can be related to the Western culture more often than to other cultures.
### Other Known Limitations
The quality of the crowdsourced references is limited, in particular in terms of fluency/naturalness of the collected texts.
Russian data was machine-translated and then post-edited by crowdworkers, so some examples may still exhibit issues related to bad translations.
## Additional Information
### Dataset Curators
The principle curator of the dataset is Anastasia Shimorina (Université de Lorraine / LORIA, France). Throughout the WebNLG releases, several people contributed to their construction: Claire Gardent (CNRS / LORIA, France), Shashi Narayan (Google, UK), Laura Perez-Beltrachini (University of Edinburgh, UK), Elena Khasanova, and Thiago Castro Ferreira (Federal University of Minas Gerais, Brazil).
The dataset construction was funded by the French National Research Agency (ANR).
### Licensing Information
The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses the `cc-by-sa-3.0` and `gfdl-1.1` licenses.
### Citation Information
If you use the WebNLG corpus, cite:
```
@inproceedings{web_nlg,
author = {Claire Gardent and
Anastasia Shimorina and
Shashi Narayan and
Laura Perez{-}Beltrachini},
editor = {Regina Barzilay and
Min{-}Yen Kan},
title = {Creating Training Corpora for {NLG} Micro-Planners},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational
Linguistics, {ACL} 2017, Vancouver, Canada, July 30 - August 4, Volume
1: Long Papers},
pages = {179--188},
publisher = {Association for Computational Linguistics},
year = {2017},
url = {https://doi.org/10.18653/v1/P17-1017},
doi = {10.18653/v1/P17-1017}
}
```
### Contributions
Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset. |
society-ethics/papers | 2023-05-31T13:53:19.000Z | [
"ethics",
"arxiv:1906.02569",
"arxiv:1910.01108",
"arxiv:2109.14076",
"arxiv:2205.02894",
"arxiv:2206.03216",
"arxiv:2103.12028",
"arxiv:2111.04424",
"arxiv:2208.11695",
"arxiv:2212.05129",
"arxiv:2205.12586",
"arxiv:2210.05839",
"arxiv:2110.08207",
"arxiv:2211.05100",
"arxiv:2303.03915"... | society-ethics | null | null | null | 7 | 6 | ---
tags:
- ethics
---
# Hugging Face Ethics & Society Papers
This is an incomplete list of ethics-related papers published by researchers at Hugging Face.
- Gradio: https://arxiv.org/abs/1906.02569
- DistilBERT: https://arxiv.org/abs/1910.01108
- RAFT: https://arxiv.org/abs/2109.14076
- Interactive Model Cards: https://arxiv.org/abs/2205.02894
- Data Governance in the Age of Large-Scale Data-Driven Language Technology: https://arxiv.org/abs/2206.03216
- Quality at a Glance: https://arxiv.org/abs/2103.12028
- A Framework for Deprecating Datasets: https://arxiv.org/abs/2111.04424
- Bugs in the Data: https://arxiv.org/abs/2208.11695
- Measuring Data: https://arxiv.org/abs/2212.05129
- Perturbation Augmentation for Fairer NLP: https://arxiv.org/abs/2205.12586
- SEAL: https://arxiv.org/abs/2210.05839
- Multitask Prompted Training Enables Zero-Shot Task Generalization: https://arxiv.org/abs/2110.08207
- BLOOM: https://arxiv.org/abs/2211.05100
- ROOTS: https://arxiv.org/abs/2303.03915
- Evaluate & Evaluation on the Hub: https://arxiv.org/abs/2210.01970
- Spacerini: https://arxiv.org/abs/2302.14534
- ROOTS Search Tool: https://arxiv.org/abs/2302.14035
- Fair Diffusion: https://arxiv.org/abs/2302.10893
- Counting Carbon: https://arxiv.org/abs/2302.08476
- The Gradient of Generative AI Release: https://arxiv.org/abs/2302.04844
- BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model: https://arxiv.org/abs/2212.04960
- Towards Openness Beyond Open Access: User Journeys through 3 Open AI Collaboratives: https://arxiv.org/abs/2301.08488
- Stable Bias: Analyzing Societal Representations in Diffusion Models: https://arxiv.org/abs/2303.11408
- Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML: https://arxiv.org/abs/2305.18615 |
cartesinus/leyzer-fedcsis | 2023-03-15T00:12:59.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"language:pl",
"language:es",
"license:cc-by-4.0",
"natural-language-understanding",
"region:us"
] | cartesinus | Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language
understanding (NLU) models and the strategies of localization of virtual assistants. It consists of 20
domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of
samples, ranging from 1 to 672 sentences per intent. | @inproceedings{sowanski2020leyzer,
title={Leyzer: A Dataset for Multilingual Virtual Assistants},
author={Sowa{\'n}ski, Marcin and Janicki, Artur},
booktitle={International Conference on Text, Speech, and Dialogue},
pages={477--486},
year={2020},
organization={Springer}
} | null | 0 | 6 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
- pl
- es
tags:
- natural-language-understanding
size_categories:
- 10K<n<100K
---
# Leyzer: A Dataset for Multilingual Virtual Assistants
Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language understanding (NLU) models and the strategies of localization of
virtual assistants. It consists of 20 domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of samples, ranging from 1 to 672
sentences per intent. For more statistics, please refer to the project wiki.
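A minimal loading sketch with the `datasets` library; the repository id comes from this card, and the dataset may define per-language configurations that then need to be passed as a second argument (an assumption, not stated here):
```python
from datasets import load_dataset

# Load the corpus (add a configuration name if the dataset defines
# per-language subsets, e.g. load_dataset("cartesinus/leyzer-fedcsis", "en")).
leyzer = load_dataset("cartesinus/leyzer-fedcsis")
print(leyzer)
```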
|
mfumanelli/pokemon-description-xs | 2023-03-20T11:12:15.000Z | [
"region:us"
] | mfumanelli | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: name
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 2839
num_examples: 20
download_size: 4230
dataset_size: 2839
---
# Dataset Card for "pokemon-description-xs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Fearao/guba_eastmoney | 2023-03-19T04:53:07.000Z | [
"task_categories:text-classification",
"language:zh",
"region:us"
] | Fearao | null | null | null | 1 | 6 | ---
task_categories:
- text-classification
language:
- zh
---
The data comes from comments on the Eastmoney Guba stock forum and has been manually labeled.
reginaboateng/pico_ebmnlp | 2023-03-20T14:02:22.000Z | [
"region:us"
] | reginaboateng | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: chunk_tags
sequence: string
- name: pos_tags
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-INT
'2': I-OUT
'3': I-PAR
splits:
- name: train
num_bytes: 27639457
num_examples: 23952
- name: test
num_bytes: 1482730
num_examples: 2064
- name: validation
num_bytes: 7446993
num_examples: 7049
download_size: 4096177
dataset_size: 36569180
---
# Dataset Card for "pico_ebmnlp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZurichNLP/swissner | 2023-03-24T08:37:30.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:multilingual",
"size_categories:n<1K",
"language:de",
"language:fr",
"language:it",
"language:rm",
"license:cc-by-4.0",
"arxiv:2303.13310",
"region:us"
] | ZurichNLP | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: string
- name: url
dtype: string
splits:
- name: test_de
num_bytes: 164433
num_examples: 200
- name: test_fr
num_bytes: 186036
num_examples: 200
- name: test_it
num_bytes: 197513
num_examples: 200
- name: test_rm
num_bytes: 206644
num_examples: 200
download_size: 220352
dataset_size: 754626
license: cc-by-4.0
task_categories:
- token-classification
task_ids:
- named-entity-recognition
language:
- de
- fr
- it
- rm
multilinguality:
- multilingual
pretty_name: SwissNER
size_categories:
- n<1K
---
# SwissNER
A multilingual test set for named entity recognition (NER) on Swiss news articles.
## Description
SwissNER is a dataset for named entity recognition based on manually annotated news articles in Swiss Standard German, French, Italian, and Romansh Grischun.
We have manually annotated a selection of articles that have been published in February 2023 in the categories "Switzerland" or "Regional" on the following online news portals:
- Swiss Standard German: [srf.ch](https://www.srf.ch/)
- French: [rts.ch](https://www.rts.ch/)
- Italian: [rsi.ch](https://www.rsi.ch/)
- Romansh Grischun: [rtr.ch](https://www.rtr.ch/)
For each article we extracted the first two paragraphs after the lead paragraph.
We followed the guidelines of the CoNLL-2002 and 2003 shared tasks and annotated the names of persons, organizations, locations and miscellaneous entities.
The annotation was performed by a single annotator.
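A minimal loading sketch with the `datasets` library (split names follow the `dataset_info` above):
```python
from datasets import load_dataset

# Each test split holds 200 annotated paragraphs per language.
swissner_de = load_dataset("ZurichNLP/swissner", split="test_de")
example = swissner_de[0]
# tokens and ner_tags are parallel sequences.
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag}")
```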
## License
- Text paragraphs: © Swiss Broadcasting Corporation (SRG SSR)
- Annotations: Attribution 4.0 International (CC BY 4.0)
## Statistics
| | DE | FR | IT | RM | Total |
|----------------------|-----:|------:|------:|------:|------:|
| Number of paragraphs | 200 | 200 | 200 | 200 | 800 |
| Number of tokens | 9498 | 11434 | 12423 | 13356 | 46711 |
| Number of entities | 479 | 475 | 556 | 591 | 2101 |
| – `PER` | 104 | 92 | 93 | 118 | 407 |
| – `ORG` | 193 | 216 | 266 | 227 | 902 |
| – `LOC` | 182 | 167 | 197 | 246 | 792 |
| – `MISC` | 113 | 79 | 88 | 39 | 319 |
## Citation
```bibtex
@article{vamvas-etal-2023-swissbert,
title={Swiss{BERT}: The Multilingual Language Model for Switzerland},
    author={Jannis Vamvas and Johannes Gra{\"e}n and Rico Sennrich},
year={2023},
eprint={2303.13310},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2303.13310}
}
``` |
cahya/instructions-id | 2023-03-22T12:47:41.000Z | [
"region:us"
] | cahya | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 35749284.66851785
num_examples: 85242
- name: test
num_bytes: 1986211.1657410732
num_examples: 4736
- name: validation
num_bytes: 1986211.1657410732
num_examples: 4736
download_size: 21158281
dataset_size: 39721706.99999999
---
# Dataset Card for "instructions-id"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cahya/instructions-ar | 2023-03-22T15:42:43.000Z | [
"region:us"
] | cahya | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1335708.4343484773
num_examples: 1802
- name: test
num_bytes: 74864.90114827758
num_examples: 101
- name: validation
num_bytes: 74123.66450324513
num_examples: 100
download_size: 0
dataset_size: 1484697.0
---
# Dataset Card for "instructions-ar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-source-metrics/preprocessed_stars | 2023-08-23T19:55:22.000Z | [
"region:us"
] | open-source-metrics | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: accelerate
dtype: int64
- name: datasets
dtype: int64
- name: diffusers
dtype: int64
- name: evaluate
dtype: int64
- name: gradio
dtype: int64
- name: hub_docs
dtype: int64
- name: huggingface_hub
dtype: int64
- name: optimum
dtype: int64
- name: peft
dtype: int64
- name: pytorch_image_models
dtype: int64
- name: safetensors
dtype: int64
- name: tokenizers
dtype: int64
- name: transformers
dtype: int64
- name: langchain
dtype: int64
- name: pytorch
dtype: int64
- name: stable_diffusion_webui
dtype: int64
- name: tensorflow
dtype: int64
- name: day
dtype: string
splits:
- name: raw
num_bytes: 16368
num_examples: 101
- name: wow
num_bytes: 16528
num_examples: 102
download_size: 32298
dataset_size: 32896
configs:
- config_name: default
data_files:
- split: raw
path: data/raw-*
- split: wow
path: data/wow-*
---
# Dataset Card for "preprocessed_stars"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-source-metrics/preprocessed_issues | 2023-08-23T20:11:07.000Z | [
"region:us"
] | open-source-metrics | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: accelerate
dtype: int64
- name: datasets
dtype: int64
- name: diffusers
dtype: int64
- name: evaluate
dtype: int64
- name: gradio
dtype: int64
- name: hub_docs
dtype: int64
- name: huggingface_hub
dtype: int64
- name: optimum
dtype: int64
- name: peft
dtype: int64
- name: pytorch_image_models
dtype: int64
- name: safetensors
dtype: int64
- name: tokenizers
dtype: int64
- name: transformers
dtype: int64
- name: langchain
dtype: int64
- name: pytorch
dtype: int64
- name: stable_diffusion_webui
dtype: int64
- name: tensorflow
dtype: int64
- name: day
dtype: string
splits:
- name: raw
num_bytes: 16368
num_examples: 101
- name: wow
num_bytes: 16368
num_examples: 101
- name: eom
num_bytes: 16368
num_examples: 101
- name: eom_wow
num_bytes: 16368
num_examples: 101
download_size: 64567
dataset_size: 65472
configs:
- config_name: default
data_files:
- split: raw
path: data/raw-*
- split: wow
path: data/wow-*
- split: eom
path: data/eom-*
- split: eom_wow
path: data/eom_wow-*
---
# Dataset Card for "preprocessed_issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dhuck/functional_code | 2023-04-05T15:51:51.000Z | [
"task_categories:text-generation",
"task_categories:feature-extraction",
"size_categories:100K<n<1M",
"license:afl-3.0",
"Program Synthesis",
"code",
"region:us"
] | dhuck | null | null | null | 0 | 6 | ---
license: afl-3.0
task_categories:
- text-generation
- feature-extraction
tags:
- Program Synthesis
- code
pretty_name: Functional Code
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: _id
dtype: string
- name: repository
dtype: string
- name: name
dtype: string
- name: content
dtype: string
- name: license
dtype: 'null'
- name: download_url
dtype: string
- name: language
dtype: string
- name: comments
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 7561888852
num_examples: 611738
- name: test
num_bytes: 1876266819
num_examples: 152935
download_size: 3643404015
dataset_size: 9438155671
---
# Dataset Card for Functional Code
## Dataset Description
Collection of code in functional programming languages from GitHub.
- **Point of Contact:** dhuck
### Dataset Summary
This dataset is a collection of code examples in functional programming languages for code generation tasks. It was collected over a week-long period in March 2023 as part of a project in program synthesis.
## Dataset Structure
### Data Instances
```
{
'id': str
'repository': str
'filename': str
'license': str or Empty
'language': str
'content': str
}
```
### Data Fields
* `id`: SHA256 hash of the content field. This ID scheme ensures that duplicate code examples, e.g. via forks or other duplication, are removed from the dataset.
* `repository`: The repository that the file was pulled from. This can be used for attribution or to check updated licensing information for the code example.
* `filename`: Filename of the code example within the repository.
* `license`: Licensing information of the repository. This can be empty, and further work is likely necessary to parse licensing information from individual files.
* `language`: Programming language of the file. For example, Haskell, Clojure, Lisp, etc.
* `content`: Source code of the file. This is the full text of the source with some cleaning as described in the Curation section below. While many examples are short, others can be extremely long. This field will likely require preprocessing for end tasks.
### Data Splits
More information to be provided at a later date. There are 157,218 test examples and 628,869 training examples. The split was created using `scikit-learn`'s `train_test_split` function.
## Dataset Creation
### Curation Rationale
This dataset was put together for program synthesis tasks. The majority of available datasets consist of imperative programming languages, while the program synthesis community has a rich history of methods using functional languages. This dataset aims to unify the two approaches by making a large training corpus of functional languages available to researchers.
### Source Data
#### Initial Data Collection and Normalization
Code examples were collected in a similar manner to other existing programming language datasets. Each example was pulled from public repositories on GitHub over a week in March 2023. I performed this task by searching common file extensions of the target languages (Clojure, Elixir, Haskell, Lisp, OCaml, Racket and Scheme). The full source is included for each coding example, so padding or truncation will be necessary for any training tasks. Significant effort was made to remove any personal information from each coding example. For each code example, I removed any email addresses or websites using simple regex pattern matching. spaCy NER was used to identify proper names in the comments only. Any token which spanned a name was simply replaced with the token `PERSON`, while email addresses and websites were dropped from each comment. Organizations and other information were left intact.
#### Who are the source language producers?
Each example contains the repository the code originated from, identifying the source of each example.
### Personal and Sensitive Information
While great care was taken to remove proper names, email addresses, and websites, there may exist examples where pattern matching did not work. While I used the best spaCy models available, I did witness false negatives on other tasks on other datasets. To ensure no personal information makes it into training data, it is advisable to remove all comments if the training task does not require them. I made several PRs to the `comment_parser` Python library to support the languages in this dataset. My version of the parsing library can be found at [https://github.com/d-huck/comment_parser](https://github.com/d-huck/comment_parser)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
While code itself may not contain bias, programmers can use offensive, racist, homophobic, transphobic, misogynistic, etc words for variable names. Further updates to this dataset library will investigate and address these issues. Comments in the code examples could also contain hateful speech. Models trained on this dataset may need additional training on toxicity to remove these tendencies from the output.
### Other Known Limitations
The code present in this dataset has not been checked for quality in any way. It is possible and probable that several of the coding examples are of poor quality and do not actually compile or run in their target language. Furthermore, there exists a chance that some examples are not the language they claim to be, since github search matching is dependent only on the file extension and not the actual contents of any file. |
drzraf/petfinder-dogs | 2023-03-31T18:47:42.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:unknown",
"pets",
"dogs",
"animals",
"photos",
"region:us"
] | drzraf | null | null | null | 2 | 6 | ---
annotations_creators: []
language_creators:
- crowdsourced
license:
- unknown
multilinguality:
- monolingual
pretty_name: 300px dogs photos from Petfinder
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- pets
- dogs
- animals
- photos
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for "petfinder-dogs"
## Dataset Description
- **Homepage:** https://www.petfinder.com/
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Contains 700k+ 300px-wide images of 150k+ distinct dogs extracted from the PetFinder API in March 2023.
Only dogs with at least 4 photos are included; each subject has between 4 and 12 photos.
This dataset aims to simplify AI work based on dog images and to avoid rescraping thousands of them from the PetFinder API again and again.
|
nkasmanoff/nasa_earth_instagram | 2023-03-30T11:04:45.000Z | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"size_categories:n<1K",
"region:us"
] | nkasmanoff | null | null | null | 0 | 6 | ---
task_categories:
- image-to-text
- text-to-image
size_categories:
- n<1K
---
# NASA Earth Instagram
This dataset is a moderately curated subset of the posts shown on [NASA Earth's Instagram](https://www.instagram.com/nasaearth/), with an emphasis on finding image-text pairs where the associated text is as close as possible to a direct caption of the image in question.
This dataset has a variety of use cases, but the one it was originally intended for is to provide a fine-tuning dataset for image-captioning models, making them better equipped to describe the exact phenomena in satellite imagery.
The owner of all images and text in this data is NASA. |
mikegarts/oa_tell_a_joke_20000 | 2023-04-02T12:44:50.000Z | [
"language:en",
"license:mit",
"region:us"
] | mikegarts | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
struct:
- name: link
dtype: string
- name: nsfw
dtype: bool
splits:
- name: train
num_bytes: 11848430
num_examples: 20000
download_size: 6222319
dataset_size: 11848430
license: mit
language:
- en
---
# Dataset Card for "oa_tell_a_joke_20000"
This dataset is based on the SocialGrep/one-million-reddit-jokes dataset, and augmented using KeyBert to be used for the [Open Assistant project](https://github.com/LAION-AI/Open-Assistant).
Additional details of dataset creation are [here](https://github.com/mikegarts/Open-Assistant/blob/OA-261.tell_a_joke_dataset/data/datasets/tell_a_joke/tell_a_joke.ipynb)
# Data fields:
- `INSTRUCTION` - the instruction to the assistant
- `RESPONSE` - the response of the assistant
- `SOURCE` - the source of the data
- `METADATA` - additional information, such as a link to the source webpage on Reddit
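A minimal loading sketch (field names follow the `dataset_info` above):
```python
from datasets import load_dataset

jokes = load_dataset("mikegarts/oa_tell_a_joke_20000", split="train")
print(jokes[0]["INSTRUCTION"])
print(jokes[0]["RESPONSE"])
```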
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sin3142/memes-1500 | 2023-04-06T05:11:46.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"region:us"
] | sin3142 | null | null | null | 1 | 6 | ---
task_categories:
- image-classification
size_categories:
- 1K<n<10K
--- |
Amirkid/jokes | 2023-04-06T19:45:55.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | Amirkid | null | null | null | 0 | 6 | ---
license: creativeml-openrail-m
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 131818111
num_examples: 578634
download_size: 86215403
dataset_size: 131818111
---
|
one-sec-cv12/chunk_0 | 2023-04-06T21:46:09.000Z | [
"region:us"
] | one-sec-cv12 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 21981833424.125
num_examples: 228863
download_size: 18831760350
dataset_size: 21981833424.125
---
# Dataset Card for "chunk_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
camel-ai/ai_society_translated | 2023-05-23T21:12:39.000Z | [
"task_categories:text-generation",
"language:ar",
"language:zh",
"language:ko",
"language:ja",
"language:hi",
"language:ru",
"language:es",
"language:fr",
"language:de",
"language:it",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"arxiv:2303.17760",
"region:us"
] | camel-ai | null | null | null | 12 | 6 | ---
license: cc-by-nc-4.0
language:
- ar
- zh
- ko
- ja
- hi
- ru
- es
- fr
- de
- it
tags:
- instruction-finetuning
pretty_name: CAMEL AI Society Translated
task_categories:
- text-generation
arxiv: 2303.17760
extra_gated_prompt: "By using this data, you acknowledge and agree to utilize it solely for research purposes, recognizing that the dataset may contain inaccuracies due to its artificial generation through ChatGPT."
extra_gated_fields:
Name: text
Email: text
I will adhere to the terms and conditions of this dataset: checkbox
---
# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**
- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760
## Dataset Summary
The original AI Society dataset is in English and is composed of 25K conversations between two gpt-3.5-turbo agents. The dataset is obtained by running role-playing for a combination of 50 user roles and 50 assistant roles with each combination running over 10 tasks.
We provide translated versions of the original English dataset into ten languages: Arabic, Chinese, Korean, Japanese, Hindi, Russian, Spanish, French, German, and Italian in ".zip" format.
The dataset was translated by prompting gpt-3.5-turbo to translate the presented sentences into a particular language.
**Note:** Sometimes GPT decides not to translate particular keywords such as "Instruction", "Input", and "Solution". Therefore, cleaning might be needed depending on your use case.
## Data Fields
**The data fields for chat format (`ai_society_chat_{language}.zip`) are as follows:**
* `input`: {assistant\_role\_index}\_{user\_role\_index}\_{task\_index}, for example 001_002_003 refers to assistant role 1, user role 2, and task 3 from our text assistant role names, user role names and task text files.
* `role_1`: assistant role
* `role_2`: user role
* `original_task`: the general assigned task for the assistant and user to cooperate on.
* `specified_task`: the task after task specifier, this task is more specific than the original task.
* `message_k`: refers to the k<sup>_th_</sup> message of the conversation.
* `role_type`: refers to whether the agent is an assistant or a user.
* `role_name`: refers to the assigned assistant/user role.
* `role`: refers to the role of the agent during the message for openai api. [usually not needed]
* `content`: refers to the content of the message.
* `termination_reason`: refers to the reason of termination of the chat.
* `num_messages`: refers to the total number of messages in the chat.
**Download in python**
```python
from huggingface_hub import hf_hub_download
# replace {language} by one of the following: ar, zh, ko, ja, hi, ru, es, fr, de, it
hf_hub_download(repo_id="camel-ai/ai_society_translated", repo_type="dataset", filename="ai_society_chat_{language}.zip",
local_dir="datasets/", local_dir_use_symlinks=False)
```
### Citation
```
@misc{li2023camel,
title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
year={2023},
eprint={2303.17760},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer:
This data was synthetically generated by gpt-3.5-turbo and might contain incorrect information. The dataset is there only for research purposes.
|
ehartford/leet10k-alpaca | 2023-05-02T05:44:45.000Z | [
"license:apache-2.0",
"region:us"
] | ehartford | null | null | null | 11 | 6 | ---
license: apache-2.0
---
|
nasa-cisto-data-science-group/modis-lake-powell-raster-dataset | 2023-04-11T18:19:51.000Z | [
"license:apache-2.0",
"region:us"
] | nasa-cisto-data-science-group | null | null | null | 0 | 6 | ---
license: apache-2.0
---
# MODIS Water Lake Powell Raster Dataset
### Dataset Summary
Raster dataset comprised of MODIS surface reflectance bands along with calculated indices and a label (water/not-water)
## Dataset Structure
### Data Fields
- `water`: Label, water or not-water (binary)
- `sur_refl_b01_1`: MODIS surface reflectance band 1 (-100, 16000)
- `sur_refl_b02_1`: MODIS surface reflectance band 2 (-100, 16000)
- `sur_refl_b03_1`: MODIS surface reflectance band 3 (-100, 16000)
- `sur_refl_b04_1`: MODIS surface reflectance band 4 (-100, 16000)
- `sur_refl_b05_1`: MODIS surface reflectance band 5 (-100, 16000)
- `sur_refl_b06_1`: MODIS surface reflectance band 6 (-100, 16000)
- `sur_refl_b07_1`: MODIS surface reflectance band 7 (-100, 16000)
- `ndvi`: Normalized difference vegetation index (-20000, 20000)
- `ndwi1`: Normalized difference water index 1 (-20000, 20000)
- `ndwi2`: Normalized difference water index 2 (-20000, 20000)
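For illustration, a minimal sketch of how an NDVI-style index relates to the reflectance bands. The band roles (band 1 = red, band 2 = NIR, the standard MODIS ordering) are an assumption of this sketch, and the integer scale factor used to reach the (-20000, 20000) range is not stated in this card:
```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Unscaled NDVI in [-1, 1]; the dataset stores an integer-scaled version."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red)

# MODIS-style reflectance values (valid range -100..16000):
red_band = np.array([500.0, 1200.0])   # sur_refl_b01_1 (red, by MODIS convention)
nir_band = np.array([3000.0, 2500.0])  # sur_refl_b02_1 (NIR, by MODIS convention)
print(ndvi(red_band, nir_band))
```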
## Dataset Creation
## Source Data
[MODIS MOD44W](https://lpdaac.usgs.gov/products/mod44wv006/)
[MODIS MOD09GA](https://lpdaac.usgs.gov/products/mod09gav006/)
[MODIS MOD09GQ](https://lpdaac.usgs.gov/products/mod09gqv006/)
## Annotation process
Labels were created by using the MOD44W C6 product to designate pixels in MODIS surface reflectance products as land or water. |
arnavmahapatra/fruit-detection-dataset | 2023-04-15T17:29:40.000Z | [
"license:cc-by-4.0",
"region:us"
] | arnavmahapatra | null | null | null | 0 | 6 | ---
license: cc-by-4.0
---
|
TempoFunk/tempofunk-sdance | 2023-05-07T07:38:48.000Z | [
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:video-classification",
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:agpl-3.0",
"region:us"
] | TempoFunk | null | null | null | 2 | 6 | ---
task_categories:
- text-to-video
- text-to-image
- video-classification
- image-classification
language:
- en
size_categories:
- 1K<n<10K
license: agpl-3.0
---
# TempoFunk S(mall)Dance
10k samples of metadata and encoded latents & prompts of videos themed around **dance**.
## Data format
- Video frame latents
- Numpy arrays
- 120 frames, 512x512 source size
- Encoded shape (120, 4, 64, 64)
- CLIP (openai) encoded prompts
- Video description (as seen in metadata)
- Encoded shape (77,768)
- Video metadata as JSON (description, tags, categories, source URLs, etc.); a decoding sketch follows below
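A sketch of how a stored latent clip might be decoded back to frames. This assumes the latents come from a Stable-Diffusion-style VAE (consistent with the (120, 4, 64, 64) shape for 512x512 frames) and uses the conventional 0.18215 scale factor; the file name, the VAE checkpoint, and the scale factor are all assumptions, not stated in this card:
```python
import numpy as np
import torch
from diffusers import AutoencoderKL

# Hypothetical file name; the actual layout of this dataset may differ.
latents = torch.from_numpy(np.load("sample_latents.npy"))  # (120, 4, 64, 64)

# Assumed Stable-Diffusion-v1-style VAE; not confirmed by this card.
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
).eval()

with torch.no_grad():
    # Decode a few frames at a time to limit memory use.
    frames = vae.decode(latents[:8] / 0.18215).sample  # (8, 3, 512, 512), roughly in [-1, 1]
print(frames.shape)
```
|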
LevMuchnik/SupremeCourtOfIsrael | 2023-04-27T06:01:49.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:document-retrieval",
"size_categories:100K<n<1M",
"language:he",
"license:openrail",
"legal, verdicts, metadata, hebrew",
... | LevMuchnik | null | null | null | 4 | 6 | ---
license: openrail
language:
- he
tags:
- legal, verdicts, metadata, hebrew
pretty_name: Supreme Court Israel - Public Verdicts and Decisions
size_categories:
- 100K<n<1M
task_ids:
- language-modeling
- masked-language-modeling
- document-retrieval
task_categories:
- text-generation
- fill-mask
- text-retrieval
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
Lev Muchnik, lev.muchnik@mail.huji.ac.il
### Dataset Summary
This dataset represents a 2022 snapshot of the Supreme Court of Israel public verdicts and decisions supported by rich metadata. The 5.31 GB dataset represents 751,194 documents.
Overall, the dataset contains 2.68 GB of text.
It can be loaded with the dataset package:
```python
import datasets
data = datasets.load_dataset('LevMuchnik/SupremeCourtOfIsrael')
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The vast majority of the documents in the database are in Hebrew. A small number of documents are in English.
## Dataset Structure
The dataset is a json lines file with each line corresponding to a single document and containing document identification, text and metadata.
### Data Instances
[More Information Needed]
### Data Fields
The file contains the following fields:
- case_id - running number for cases
- download_time - when the document was downloaded (datetime)
- number_of_case_documents - number of documents in the current case
- file_name - full name of the document file, including relative path
- Id - document id
- CaseId - case id
- VerdictDt - Date of the document (datetime)
- CreatedDate - Date of when the document was inserted into the Supreme Court database
- CaseNum - case number
- CaseDesc - Unique case identifier. This id is used to reference cases within the Israeli legal system
- Pages - number of pages in the original document
- Path - relative path to the document
- CaseName - formal name of the case
- FileName - document file name, without path
- DocName -document file name, without path
- Year - document creation year
- TypeCode - enumeration of document types (see Type field below)
- Type - Document type
- פסק-דין 84339
- החלטה 663099
- צו ביניים 22
- פסקי דין באנגלית 310
- צו על תנאי 200
- צו 2606
- פד"י 302
- תקצירים 316
- Technical - boolean indicator of whether the document is technical or not.
- CodeVolume - ?
- document_hash - 256-bit hash of the document name, used internally to uniquely identify the document
- text - text of the document. Multiple newlines and other document formatting elements (paragraphs, lists, etc.) are preserved.
- html_title - document title extracted from the HTML
- VerdictsDt - date of the verdict
- meta_case_nm - formal case name,
- meta_sec_appeal - integer or None
- meta_side_ty - case type, list of strings
- meta_verdict_file_nm - name of the verdict file
- meta_judge - list of names of the cases judges
- meta_mador_nm - name of the court instance (e.g. בג"ץ)
- meta_side_nm - list of the case parties, list of strings
- meta_verdict_dt - date of the verdict
- meta_case_dt - date of the case
- meta_verdict_nbr -
- meta_ProgId - name of the software used to create the document (None, Word, etc)
- meta_is_technical - whether the document is technical, {'false', 'true'}
- meta_judge_nm_last - last names of the judges (list of strings)
- meta_case_nbr - formal number of the case (same as CaseDesc)
- meta_verdict_ty - type of the decision (same as Type)
- meta_lawyer_nm - list of lawyer names, list of strings or None
- meta_judge_nm_first - list of judges' first names, list of strings
- meta_verdict_pages - number of document cases
- meta_inyan_nm - case matter (e.g. בג"ץ)
- meta_court_nm - court (e.g. בית המשפט העליון )
### Data Splits
The entire dataset is qualified as 'train'.
## Dataset Creation
2023-04-22
### Curation Rationale
[More Information Needed]
### Source Data
https://supreme.court.gov.il/
#### Initial Data Collection and Normalization
The data was collected by crawling the Israeli Supreme Court website.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The data contained in this dataset is public.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Prof. Lev Muchnik, Hebrew University of Jerusalem
Dr. Inbal Yahav Shenberger, Tel Aviv University
### Licensing Information
[More Information Needed]
### Citation Information
Lev Muchnik, Inbal Yahav, Ariel Nevo, Avichay Chriqui, Tim Shektov, 2023, The Israeli Supreme Court Dataset
### Contributions
The authors would like to thank the Israeli Innovation Authority (grants #78560 and #78561) for their support in the creation of this dataset. |
somosnlp/recetas-cocina | 2023-04-23T00:11:20.000Z | [
"task_categories:table-question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:es",
"license:mit",
"region:us"
] | somosnlp | null | null | null | 1 | 6 | ---
license: mit
task_categories:
- table-question-answering
- text-generation
language:
- es
pretty_name: recetas de cocina
size_categories:
- 10K<n<100K
--- |
Aruno/guanaco_jp | 2023-04-24T03:45:26.000Z | [
"task_categories:text-generation",
"language:ja",
"license:apache-2.0",
"region:us"
] | Aruno | null | null | null | 3 | 6 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- ja
pretty_name: Guanaco Japanese Prompt
---
Japanese prompts from [GuanacoDataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset), extracted using `langdetect`.
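A minimal sketch of the kind of `langdetect` filtering described above (the author's exact procedure is not documented; this only illustrates the library call):
```python
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect deterministic

def is_japanese(text: str) -> bool:
    try:
        return detect(text) == "ja"
    except Exception:  # langdetect raises on empty or undetectable input
        return False

print(is_japanese("今日はいい天気ですね"))  # True
print(is_japanese("Hello, world"))  # False
```
|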
metaeval/logiqa-2.0-nli | 2023-06-22T14:06:42.000Z | [
"task_ids:natural-language-inference",
"language:en",
"license:cc",
"region:us"
] | metaeval | null | null | null | 0 | 6 | ---
license: cc
language:
- en
task_ids:
- natural-language-inference
---
https://github.com/csitfun/LogiQA2.0
Temporary citation:
```
@article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
}
``` |
Harsit/xnli2.0_train_bengali | 2023-04-24T20:11:06.000Z | [
"region:us"
] | Harsit | null | null | null | 0 | 6 | Entry not found |
amitness/wikipedia_it | 2023-08-14T09:45:05.000Z | [
"language:it",
"region:us"
] | amitness | null | null | null | 0 | 6 | ---
language: it
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4809624134
num_examples: 1808474
download_size: 2865384809
dataset_size: 4809624134
---
# Dataset Card for "wikipedia_it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-elementary_mathematics-verbal-neg-prepend | 2023-04-27T03:17:34.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 118505
num_examples: 378
download_size: 67618
dataset_size: 118505
---
# Dataset Card for "mmlu-elementary_mathematics-verbal-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kz-transformers/multidomain-kazakh-dataset | 2023-05-02T07:19:37.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:kk",
"language:ru",
"license:apache-2.0",
"region:us"
] | kz-transformers | null | null | null | 7 | 6 | ---
license:
- apache-2.0
annotations_creators:
- no-annotation
language_creators:
- found
language:
- kk
- ru
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
pretty_name: MDBKD | Multi-Domain Bilingual Kazakh Dataset
---
# Dataset Description
**Point of Contact:** [Sanzhar Murzakhmetov](mailto:sanzharmrz@gmail.com), [Besultan Sagyndyk](mailto:nuxyjlbka@gmail.com)
### Dataset Summary
MDBKD | Multi-Domain Bilingual Kazakh Dataset is a Kazakh-language dataset containing 24 883 808 unique texts from multiple domains.
### Supported Tasks
- 'MLM/CLM': can be used to train a model for causal and masked language modeling
### Languages
The kk code for Kazakh as generally spoken in Kazakhstan
### Data Instances
For each instance, there is a string for the text and a string for the id.
```python
{'text': 'Алматыда баспана қымбаттап жатыр Қазақстанда пәтер бағасы түсті Жыл басынан бері баспана бағасы 6,2%-ға қымбаттады Мегополистегі пәтер бағасына шолу. Алматыда пандемия басталғалы баспана қымбаттап барады. Мұның себебі нарықтағы сұраныстың көбеюімен және теңгенің құнсыздануымен байланысты, деп хабарлайды Atameken Business. Арна тілшісі Жания Әбдібек нарық өкілдерімен сұхбаттасып, мегополистегі пәтер бағасына шолу жасады. Толығырақ: Мамыр айында Қазақстанның жеті ірі қаласында пәтер бағасы түскен. Орта есеппен республика бойынша тұрғын үйдің 1 шаршы метрінің бағасы 292 мың 886 теңгені құрайды. ',
 'predicted_language': 'kaz',
 'contains_kaz_symbols': 1,
 'id': '0752b3ce-f5ea-4330-9c5f-e4fecf783b00'}
```
### Data Fields
- `text`: a string containing the content body
- `predicted_language`: a string containing the predicted label of language for the text
- `contains_kaz_symbols`: an integer containing flag of any kazakh symbol in text
- `id`: a string which is a hexidecimal hash for text in split
### Data Splits
The MDBKD has 5 splits: [_cc100-monolingual-crawled-data_](https://data.statmt.org/cc-100/), _kazakhBooks_, [_leipzig_](https://wortschatz.uni-leipzig.de/en/download/Kazakh), [_oscar_](https://oscar-project.github.io/documentation/versions/oscar-2301/) and _kazakhNews_. Below are the statistics of the dataset:
| Dataset Split | Domain | Number of texts in Split | Number of tokens in Split | Number of unique tokens in Split | Median number of tokens in text |
| -------------------------------|----------------------|------------------------------| --------------------------|----------------------------------|---------------------------------|
| cc100-monolingual-crawled-data | Wikipedia articles | 19 635 580 | 441 623 321 | 6 217 337 | 12 |
| kazakhBooks | Books | 8 423 | 351 433 586 | 7 245 720 | 40 264 |
| leipzig | Articles/News | 1 706 485 | 26 494 864 | 1 109 113 | 14 |
| oscar | CommonCrawl | 269 047 | 230 314 378 | 3 863 498 | 431 |
| kazakhNews | News | 3 264 273 | 1 041 698 037 | 5 820 543 | 209 |
With overall stats:
| Stat | Value |
|-------------------------|--------------|
| Number of texts | 24 883 808 |
| Number of tokens |2 091 564 186 |
| Number of unique tokens | 17 802 998 |
Full dataset takes **25GB**
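Given the 25 GB total size, streaming access may be preferable; a minimal sketch (the split identifier is an assumption based on the table above):
```python
from datasets import load_dataset

# Stream one domain split without downloading the full 25 GB.
ds = load_dataset(
    "kz-transformers/multidomain-kazakh-dataset",
    split="kazakhBooks",  # assumed split name, per the table above
    streaming=True,
)
print(next(iter(ds))["text"][:200])
```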
### Annotations
The dataset does not contain any additional annotations.
### Personal and Sensitive Information
Dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.
### Social Impact of Dataset
The purpose of this dataset is to organize open-source datasets in Kazakh language for further research and commercial uses
### Licensing Information
The Multi-Domain Bilingual kazakh dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Contributions
Thanks to [@KindYAK](https://github.com/KindYAK), [@BeksultanSagyndyk](https://github.com/BeksultanSagyndyk), [@SanzharMrz](https://github.com/SanzharMrz) for adding this dataset.
|
patriziobellan/PETv11 | 2023-05-01T10:38:03.000Z | [
"region:us"
] | patriziobellan | Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, gateways, actors and flow information. We present our new resource, including a detailed overview of the annotation schema and guidelines, as well as a variety of baselines to benchmark the difficulty and challenges of business process extraction from text. | @inproceedings{DBLP:conf/bpm/BellanADGP22,
author = {Patrizio Bellan and
Han van der Aa and
Mauro Dragoni and
Chiara Ghidini and
Simone Paolo Ponzetto},
editor = {Cristina Cabanillas and
Niels Frederik Garmann{-}Johnsen and
Agnes Koschmider},
title = {{PETv11:} An Annotated Dataset for Process Extraction from Natural Language
Text Tasks},
booktitle = {Business Process Management Workshops - {BPM} 2022 International Workshops,
M{\"{u}}nster, Germany, September 11-16, 2022, Revised Selected
Papers},
series = {Lecture Notes in Business Information Processing},
volume = {460},
pages = {315--321},
publisher = {Springer},
year = {2022},
url = {https://doi.org/10.1007/978-3-031-25383-6\_23},
doi = {10.1007/978-3-031-25383-6\_23},
timestamp = {Tue, 14 Feb 2023 09:47:10 +0100},
biburl = {https://dblp.org/rec/conf/bpm/BellanADGP22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{DBLP:conf/aiia/BellanGDPA22,
author = {Patrizio Bellan and
Chiara Ghidini and
Mauro Dragoni and
Simone Paolo Ponzetto and
Han van der Aa},
editor = {Debora Nozza and
Lucia C. Passaro and
Marco Polignano},
title = {Process Extraction from Natural Language Text: the {PETv11} Dataset and
Annotation Guidelines},
booktitle = {Proceedings of the Sixth Workshop on Natural Language for Artificial
Intelligence {(NL4AI} 2022) co-located with 21th International Conference
of the Italian Association for Artificial Intelligence (AI*IA 2022),
Udine, November 30th, 2022},
series = {{CEUR} Workshop Proceedings},
volume = {3287},
pages = {177--191},
publisher = {CEUR-WS.org},
year = {2022},
url = {https://ceur-ws.org/Vol-3287/paper18.pdf},
timestamp = {Fri, 10 Mar 2023 16:23:01 +0100},
biburl = {https://dblp.org/rec/conf/aiia/BellanGDPA22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 6 | ---
dataset_info:
features:
- name: document name
dtype: string
- name: tokens
sequence: string
- name: tokens-IDs
sequence: int8
- name: ner_tags
sequence: string
- name: sentence-IDs
sequence: int8
- name: relations
sequence:
- name: source-head-sentence-ID
dtype: int8
- name: source-head-word-ID
dtype: int8
- name: relation-type
dtype: string
- name: target-head-sentence-ID
dtype: int8
- name: target-head-word-ID
dtype: int8
splits:
- name: test
num_bytes: 203379
num_examples: 45
download_size: 38326
dataset_size: 203379
---
This is version 1.1.0 of the original PET dataset.
In this version we fixed the "Performs" relations and a few minor errors.
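A minimal loading sketch (the `test` split and the feature names follow the schema above; note that `document name` contains a space):
```python
from datasets import load_dataset

pet = load_dataset("patriziobellan/PETv11", split="test")
doc = pet[0]
print(doc["document name"], "-", len(doc["tokens"]), "tokens")
# Relations are stored as parallel lists of head positions and types.
print(doc["relations"]["relation-type"][:5])
```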
Please refer to the original [PET Dataset repository](https://huggingface.co/datasets/patriziobellan/PET) for more info. |
theblackcat102/oasst-red-team | 2023-05-07T09:15:21.000Z | [
"language:en",
"language:de",
"language:fr",
"language:ru",
"language:zh",
"language:ja",
"language:it",
"language:pt",
"language:th",
"language:nl",
"language:ro",
"language:pl",
"language:hu",
"language:hr",
"region:us"
] | theblackcat102 | null | null | null | 0 | 6 | ---
language:
- en
- de
- fr
- ru
- zh
- ja
- it
- pt
- th
- nl
- ro
- pl
- hu
- hr
---
Work in progress.
Red-team datasets for training and testing reward models for Open Assistant. |
zjkarina/matreshka | 2023-05-13T15:38:52.000Z | [
"task_categories:conversational",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ru",
"license:cc-by-4.0",
"region:us"
] | zjkarina | null | null | null | 9 | 6 | ---
dataset_info:
features:
- name: role
sequence: string
- name: dialog
sequence: string
- name: persona
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 7320311
num_examples: 6655
- name: validation
num_bytes: 1806432
num_examples: 1664
download_size: 4092810
dataset_size: 9126743
language:
- ru
pretty_name: matreshka
size_categories:
- 1K<n<10K
task_categories:
- conversational
- summarization
- text-generation
license: cc-by-4.0
---
# Dataset Card for "matreshka"

(image generated by Kandinsky-2.1 neural network)
Russian dialogues, the persona of the first interlocutor, and a summary of the dialogue, all generated by GPT-3.5 starting from the first phrase given in the prompt.
The matreshka dataset is a multi-task dataset: you can use it for dialogue summarization or dialogue generation. It contains everyday dialogues and is also filled with facts about the world. The dataset was designed to give the interlocutor a human manner of communication.
After generation, some data did not match the requested format, so we cleaned it with regular expressions. We then checked each line for the correct data type and converted it to the correct format where necessary.
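A minimal loading sketch (split and feature names follow the `dataset_info` above):
```python
from datasets import load_dataset

matreshka = load_dataset("zjkarina/matreshka", split="train")
sample = matreshka[0]
# "role" and "dialog" are parallel sequences; print the conversation turn by turn.
for role, utterance in zip(sample["role"], sample["dialog"]):
    print(f"{role}: {utterance}")
print("Summary:", sample["summary"])
```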
authors' telegram channels: [@nadlskom](https://t.me/nadlskom), [@lovedeathtransformers](https://t.me/lovedeathtransformers) |
Nekofox/ja-zh-twitter-translate | 2023-05-08T13:55:45.000Z | [
"task_categories:translation",
"size_categories:n<1K",
"language:zh",
"language:ja",
"license:mit",
"region:us"
] | Nekofox | null | null | null | 1 | 6 | ---
license: mit
task_categories:
- translation
language:
- zh
- ja
size_categories:
- n<1K
---
Translated by @Nekofoxtweet (the dataset author).
Twitter source: @RindouMikoto |
readerbench/ro-business-emails | 2023-05-18T08:46:58.000Z | [
"license:apache-2.0",
"region:us"
] | readerbench | null | null | null | 0 | 6 | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: int64
- name: data
struct:
- name: body
dtype: string
- name: annotation
struct:
- name: choices
list:
- name: name
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 920922
num_examples: 868
- name: val
num_bytes: 273464
num_examples: 289
- name: test
num_bytes: 284370
num_examples: 290
download_size: 739445
dataset_size: 1478756
---
|
NiGuLa/SGDD-TST | 2023-05-12T13:16:58.000Z | [
"task_categories:sentence-similarity",
"language:en",
"license:cc",
"text style transfer",
"arxiv:2206.09676",
"arxiv:1909.05855",
"region:us"
] | NiGuLa | null | null | null | 0 | 6 | ---
language:
- en
pretty_name: Schema-Guided Dialogue Dataset for Text Style Transfer
tags:
- text style transfer
license: cc
task_categories:
- sentence-similarity
viewer: true
---
# Overview
SGDD-TST - [Schema-Guided Dialogue Dataset for Text Style Transfer](https://arxiv.org/abs/2206.09676) is a dataset for evaluating the quality of content similarity measures for text style transfer in the domain of the personal plans. The original texts were obtained from [The Schema-Guided
Dialogue Dataset](https://arxiv.org/pdf/1909.05855.pdf) and were paraphrased by the [T5-based model](https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis) trained on [GYAFC formality dataset](https://aclanthology.org/N18-1012/). The results were annotated by the crowdsource workers using [Yandex.Toloka](https://toloka.yandex.ru/).
# File description
The file consists of the following columns
- INPUT:text_first - the original text
- INPUT:text_second - the formality-transferred text
- OUTPUT:result - the automatically assigned label of the annotation (the Dawid-Skene aggregation method is used)
- CONFIDENCE:result - confidence of the annotation
- vote_type -
- vote_different - number of votes for the option "The texts are completely different"
- vote_some_details_lost - number of votes for the option "The texts are similar but have significant differences"
- vote_OK - number of votes for the option "The texts mean the same or have minor differences"
- **average - an averaged score of content similarity. This score can be used for evaluating the quality of content similarity measures, e.g. by calculating the Spearman rank correlation coefficient between these scores and automatic scores (see the sketch below)**
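A minimal sketch of that evaluation, assuming the dataset TSV has been downloaded locally and that `my_metric` is a column of automatic similarity scores you computed yourself (both the file name and that column are hypothetical):
```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("sgdd_tst.tsv", sep="\t")  # hypothetical local file name
# "average" is the human content-similarity score described above;
# "my_metric" stands in for automatic scores produced by your own measure.
rho, p_value = spearmanr(df["average"], df["my_metric"])
print(f"Spearman rho = {rho:.3f} (p = {p_value:.2g})")
```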
# Contact and Citations
If you have any questions feel free to drop a line to [Nikolay](mailto:bbkhse@gmail.com)
If you find this repository helpful, feel free to cite our publication:
```
@InProceedings{10.1007/978-3-031-08473-7_40,
author="Babakov, Nikolay
and Dale, David
and Logacheva, Varvara
and Krotova, Irina
and Panchenko, Alexander",
editor="Rosso, Paolo
and Basile, Valerio
and Mart{\'i}nez, Raquel
and M{\'e}tais, Elisabeth
and Meziane, Farid",
title="Studying the Role of Named Entities for Content Preservation in Text Style Transfer",
booktitle="Natural Language Processing and Information Systems",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="437--448",
abstract="Text style transfer techniques are gaining popularity in Natural Language Processing, finding various applications such as text detoxification, sentiment, or formality transfer. However, the majority of the existing approaches were tested on such domains as online communications on public platforms, music, or entertainment yet none of them were applied to the domains which are typical for task-oriented production systems, such as personal plans arrangements (e.g. booking of flights or reserving a table in a restaurant). We fill this gap by studying formality transfer in this domain.",
isbn="978-3-031-08473-7"
}
``` |
0x22almostEvil/reasoning-gsm-qna-oa | 2023-05-13T15:43:31.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"QnA",
"math",
"programming",
"region:us"
] | 0x22almostEvil | null | null | null | 2 | 6 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- QnA
- math
- programming
size_categories:
- 1K<n<10K
---
# Dataset Card for GSM QnA reasoning with ~8.8K entries.
### Dataset Summary
Contains a Parquet file with a list of instructions and answers.
Each row consists of
* INSTRUCTION
* RESPONSE
* SOURCE
* METADATA (json with language).
### Original Datasets are available here:
* https://huggingface.co/datasets/gsm8k
* https://huggingface.co/datasets/reasoning-machines/gsm-hard |
danielv835/personal_finance_v0.2 | 2023-05-13T21:06:35.000Z | [
"region:us"
] | danielv835 | null | null | null | 11 | 6 | ---
dataset_info:
features:
- name: context
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 105692600
num_examples: 56557
- name: test
num_bytes: 1825911
num_examples: 1000
download_size: 64159306
dataset_size: 107518511
---
# Dataset Card for "personal_finance_v0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
blastwind/github-code-haskell-function | 2023-05-16T05:05:40.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"code",
"haskell",
"region:us"
] | blastwind | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: path
dtype: string
- name: license
dtype: string
- name: full_code
dtype: string
- name: full_size
dtype: int64
- name: uncommented_code
dtype: string
- name: uncommented_size
dtype: int64
- name: function_only_code
dtype: string
- name: function_only_size
dtype: int64
- name: is_commented
dtype: bool
- name: is_signatured
dtype: bool
- name: n_ast_errors
dtype: int64
- name: ast_max_depth
dtype: int64
- name: n_whitespaces
dtype: int64
- name: n_ast_nodes
dtype: int64
- name: n_ast_terminals
dtype: int64
- name: n_ast_nonterminals
dtype: int64
- name: loc
dtype: int64
- name: cycloplexity
dtype: int64
splits:
- name: train
num_bytes: 3094608763
num_examples: 3263408
download_size: 1168831903
dataset_size: 3094608763
task_categories:
- text-generation
tags:
- code
- haskell
size_categories:
- 1M<n<10M
---
# Dataset Card for "github-code-haskell-function"
Rows: 3.26M
Download Size: 1.17GB
This dataset is extracted from [github-code-haskell-file](https://huggingface.co/datasets/blastwind/github-code-haskell-file).
Each row has 3 flavors of the same function:
`uncommented_code`: Includes the function and its closest signature.
`function_only_code`: Includes the function only.
`full_code`: Includes the function and its closest [signature](https://wiki.haskell.org/Type_signature) and comment.
The heuristic for finding the closest signature and comment is as follows: if the immediate previous neighbor of the function is neither a signature nor a comment, `full_code` is just the function. If the previous neighbor is one of the two, it is included, and the search continues from that neighbor for the other node type using the same logic.
Further, each row also contains attribute values for my personal analysis project. The attributes are calculated from the code in column `uncommented_code`.
7% (225k) of the rows have cyclomatic complexity and LOC valued at `-1` because [`homplexity`](https://github.com/BlastWind/homplexity) failed to parse the row's `uncommented_code`.
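For example, a minimal sketch that drops those unparsed rows (field names follow the YAML schema above):
```python
from datasets import load_dataset

ds = load_dataset("blastwind/github-code-haskell-function", split="train")

# Drop the ~7% of rows where homplexity failed (loc and cycloplexity are -1).
parsed = ds.filter(lambda ex: ex["cycloplexity"] != -1 and ex["loc"] != -1)
print(len(ds), "->", len(parsed))
```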
|
Pranavkpba2000/skin_cancer_dataset | 2023-05-14T08:47:49.000Z | [
"region:us"
] | Pranavkpba2000 | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 9380942753.528
num_examples: 28516
- name: test
num_bytes: 1445202498.285
num_examples: 7105
download_size: 9852696203
dataset_size: 10826145251.813
---
# Dataset Card for "skin_cancer_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AmazonScience/xtr-wiki_qa | 2023-07-24T17:32:38.000Z | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:open-domain-qa",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:extended|wiki_qa",
"l... | AmazonScience | null | null | null | 1 | 6 | ---
annotations_creators:
- machine-generated
language:
- ar
- es
- fr
- de
- hi
- it
- ja
- nl
- pt
language_creators:
- found
license_details: https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/blob/main/LICENSE.md
multilinguality:
- multilingual
- translation
pretty_name: xtr-wiki_qa
size_categories:
- 100K<n<1M
source_datasets:
- extended|wiki_qa
tags:
- as2
- answer sentence selection
- text retrieval
- question answering
task_categories:
- question-answering
- text-retrieval
task_ids:
- open-domain-qa
license: cdla-permissive-2.0
---
# Xtr-WikiQA
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Amazon Science](https://www.amazon.science/publications/cross-lingual-knowledge-distillation-for-answer-sentence-selection-in-low-resource-languages)
- **Paper:** [Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages](https://aclanthology.org/2023.findings-acl.885/)
- **Point of Contact:** [Yoshitomo Matsubara](yomtsub@amazon.com)
### Dataset Summary
***Xtr-WikiQA*** is an Answer Sentence Selection (AS2) dataset in 9 non-English languages, proposed in our paper accepted at ACL 2023 (Findings): [**Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages**](https://aclanthology.org/2023.findings-acl.885/).
This dataset is based on an English AS2 dataset, WikiQA ([Original](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0), [Hugging Face](https://huggingface.co/datasets/wiki_qa)).
For translations, we used [Amazon Translate](https://aws.amazon.com/translate/).
### Languages
- Arabic (ar)
- Spanish (es)
- French (fr)
- German (de)
- Hindi (hi)
- Italian (it)
- Japanese (ja)
- Dutch (nl)
- Portuguese (pt)
File location: [`tsv/`](https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/tree/main/tsv)
## Dataset Structure
### Data Instances
This is an example instance from the Arabic training split of Xtr-WikiQA dataset.
```
{
"QuestionID": "Q1",
"Question": "كيف تتشكل الكهوف الجليدية؟",
"DocumentID": "D1",
"DocumentTitle": "كهف جليدي",
"SentenceID": "D1-0",
"Sentence": "كهف جليدي مغمور جزئيًا على نهر بيريتو مورينو الجليدي.",
"Label": 0
}
```
All the translated instances in tsv files are listed in the same order of the original (native) instances in the WikiQA dataset.
For example, the 2nd instance in [`tsv/ar-train.tsv`](https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/blob/main/tsv/ar-train.tsv) (Arabic-translated from English)
corresponds to the 2nd instance in [`WikiQA-train.tsv`](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0) (English).
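A minimal sketch for reading one of these TSV files directly from the Hub (the file path follows the example above; a header row with the documented fields is assumed):
```python
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="AmazonScience/xtr-wiki_qa",
    repo_type="dataset",
    filename="tsv/ar-train.tsv",
)
df = pd.read_csv(path, sep="\t")  # assumes a header row with the fields below
print(df.head())
```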
### Data Fields
Each instance (a QA pair) consists of the following fields:
- `QuestionID`: Question ID (str)
- `Question`: Question to be answered (str)
- `DocumentID`: Document ID (str)
- `DocumentTitle`: Document title (str)
- `SentenceID`: ID of the answer sentence in the document (str)
- `Sentence`: Answer sentence in the document (str)
- `Label`: Label that indicates whether the answer sentence correctly answers the question (int, 1: correct, 0: incorrect)
### Data Splits
| | | **#Questions** | | | | **#Sentences** | |
|-------------------|------------:|---------------:|---------:|---|----------:|---------------:|---------:|
| | **train** | **dev** | **test** | | **train** | **dev** | **test** |
| **Each language** | 873 | 126 | 243 | | 8,671 | 1,130 | 2,351 |
See [our paper](#citation-information) for more details about the statistics of the datasets.
## Dataset Creation
### Source Data
The source of Xtr-WikiQA dataset is [WikiQA](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0).
## Additional Information
### Licensing Information
[CDLA-Permissive-2.0](LICENSE.md)
### Citation Information
```bibtex
@inproceedings{gupta2023cross-lingual,
title={{Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages}},
author={Gupta, Shivanshu and Matsubara, Yoshitomo and Chadha, Ankit and Moschitti, Alessandro},
booktitle={Findings of the Association for Computational Linguistics: ACL 2023},
pages={14078--14092},
year={2023}
}
```
### Contributions
- [Shivanshu Gupta](https://huggingface.co/shivanshu)
- [Yoshitomo Matsubara](https://huggingface.co/yoshitomo-matsubara)
- Ankit Chadha
- Alessandro Moschitti |
Soyoung/HistRED | 2023-08-01T15:05:24.000Z | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:ko",
"license:cc-by-nc-nd-4.0",
"art",
"arxiv:2307.04285",
"region:us"
] | Soyoung | null | null | null | 1 | 6 | ---
license: cc-by-nc-nd-4.0
task_categories:
- token-classification
language:
- ko
tags:
- art
size_categories:
- 1K<n<10K
---
This is the official repository for **HistRED: A Historical Document-Level Relation Extraction Dataset** (ACL 2023).
All materials related to this paper can be found here.
- [ACL Anthology](https://aclanthology.org/2023.acl-long.180/): Official proceeding publication
- [Virtual-ACL 2023](https://virtual2023.aclweb.org/paper_P536.html#slides): You can view papers, posters, and presentation slides.
- [arXiv](https://arxiv.org/abs/2307.04285): The camera-ready version of the paper.
Note that this dataset is released under the [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/) license.
The same code (without the dataset) is also available on [GitHub](https://github.com/dudrrm/HistRED/tree/main).
```python
from datasets import load_dataset
dataset = load_dataset("Soyoung/HistRED")
```
# Dataset Example
Due to the complexity of the dataset, we replace the dataset preview with an example figure.
The text is translated into English for comprehension (*); however, unlike the figure, the dataset does not include English-translated text and contains only Korean and Hanja.
Also, only one relation is shown for readability.
Relation information includes
1. subject and object entities for Korean and Hanja *(sbj_kor, sbj_han, obj_kor, obj_han)*,
2. a relation type *(label)*,
3. and evidence sentence index(es) for each language *(evidence_kor, evidence_han)*.
Metadata contains additional information, such as which book the text is extracted from.

# Corpus of HistRED: \<\< Yeonhaengnok \>\>
In this dataset, we choose *Yeonhaengnok*, a collection of records originally written in Hanja, classical Chinese writing, which was later translated into Korean.
[Joseon](https://en.wikipedia.org/wiki/Joseon), the last dynastic kingdom of Korea, lasted just over five centuries, from 1392 to 1897, and many aspects of Korean traditions and customs trace their roots back to this era.
Numerous historical documents exist from the Joseon dynasty, including *Annals of Joseon Dynasty* ([AJD](https://en.wikipedia.org/wiki/Veritable_Records_of_the_Joseon_Dynasty)) and *Diaries of the Royal Secretariats* ([DRS](https://en.wikipedia.org/wiki/Seungjeongwon_ilgi)).
Note that the majority of Joseon's records were written in Hanja, the archaic Chinese writing that differs from modern Chinese because the Korean language had not been standardized until much later.
In short, Yeonhaengnok is a travel diary from the Joseon period. In the past, traveling to other places, particularly to foreign countries, was rare.
Therefore, intellectuals who traveled to Chung (also referred to as the [Qing dynasty](https://en.wikipedia.org/wiki/Qing_dynasty)) meticulously documented their journeys, and Yeonhaengnok is a compilation of these accounts.
Diverse individuals from different generations recorded their business trips following similar routes from Joseon to Chung, focusing on people, products, and events they encountered.
The Institute for the Translation of Korean Classics (ITKC) has open-sourced the original and their translated texts for many historical documents, promoting active historical research.
The entire documents were collected from an open-source database at https://db.itkc.or.kr/.
# Properties
- Our dataset contains (i) named entities, (ii) relations between the entities, and (iii) parallel relationships between Korean and Hanja texts.
- <code style="color : red"> dataset.py </code> returns a processed dataset that can be fed directly to general NLP models.
  - For the monolingual setting: *KoreanDataset*, *HanjaDataset*
  - For the bilingual setting: *JointDataset*
- <code style="color : red"> ner_map.json </code> and <code style="color : red"> label_map.json </code> are the mapping dictionaries from label classes to indexes (see the sketch below).
- Sequence level (SL) is the unit of sequence length used to extract self-contained sub-texts without losing context information for each relation in the text. Each folder SL-k indicates that SL is k.
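A minimal sketch of reading the label mappings, assuming `ner_map.json` and `label_map.json` sit at the repository root; adjust the paths if the layout differs:
```python
# A minimal sketch, assuming label_map.json sits at the repository root.
import json

from huggingface_hub import hf_hub_download

path = hf_hub_download("Soyoung/HistRED", "label_map.json", repo_type="dataset")
with open(path, encoding="utf-8") as f:
    label_map = json.load(f)  # mapping from relation classes to indexes
print(len(label_map))
```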
# Dataset usages
- Testbed for evaluating the model performance when varying the sequence length.
- Relation extraction, especially on non-English or historical corpora.
# Citation
```
@inproceedings{yang-etal-2023-histred,
title = "{H}ist{RED}: A Historical Document-Level Relation Extraction Dataset",
author = "Yang, Soyoung and
Choi, Minseok and
Cho, Youngwoo and
Choo, Jaegul",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.180",
pages = "3207--3224",
}
```
|
joey234/mmlu-college_biology | 2023-08-23T04:29:43.000Z | [
"region:us"
] | joey234 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: fewshot_context_neg
dtype: string
splits:
- name: dev
num_bytes: 5229
num_examples: 5
- name: test
num_bytes: 588718
num_examples: 144
download_size: 98643
dataset_size: 593947
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-college_biology"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrainingDataPro/pose_estimation | 2023-09-14T16:47:12.000Z | [
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] | TrainingDataPro | The dataset is primarily intended to identify and predict the positions of major
joints of a human body in an image. It consists of people's photographs with
body parts labeled with keypoints. | @InProceedings{huggingface:dataset,
title = {pose_estimation},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 6 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
language:
- en
tags:
- code
- finance
dataset_info:
features:
- name: image_id
dtype: uint32
- name: image
dtype: image
- name: mask
dtype: image
- name: shapes
dtype: string
splits:
- name: train
num_bytes: 142645152
num_examples: 29
download_size: 137240523
dataset_size: 142645152
---
# Pose Estimation
The dataset is primarily intended to identify and predict the positions of the major joints of a human body in an image. It consists of people's photographs with body parts labeled with keypoints.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=pose_estimation) to discuss your requirements, learn about the price and buy the dataset.

# Data Format
Each image from `EP` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the key points. For each point, the x and y coordinates are provided, and there is a `Presumed_Location` attribute, indicating whether the point is presumed or accurately defined.
# Example of XML file structure
.png?generation=1684358333663868&alt=media)
# Labeled body parts
The keypoints are ordered, and each corresponds to a specific part of the body:
0. **Nose**
1. **Neck**
2. **Right shoulder**
3. **Right elbow**
4. **Right wrist**
5. **Left shoulder**
6. **Left elbow**
7. **Left wrist**
8. **Right hip**
9. **Right knee**
10. **Right foot**
11. **Left hip**
12. **Left knee**
13. **Left foot**
14. **Right eye**
15. **Left eye**
16. **Right ear**
17. **Left ear**
# Keypoint annotation is made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=pose_estimation) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
ma2za/many_emotions | 2023-06-10T02:18:01.000Z | [
"task_categories:text-classification",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:dair-ai/emotion",
"source_datasets:daily_dialog",
"source_datasets:go_emotions",
"language:en",
"license:apache-2.0",
"emotion",
"region:us"
] | ma2za | null | null | null | 0 | 6 | ---
license:
apache-2.0
task_categories:
- text-classification
multilinguality:
- multilingual
source_datasets:
- dair-ai/emotion
- daily_dialog
- go_emotions
language:
- en
size_categories:
- 100K<n<1M
tags:
- emotion
---
# Dataset Card for "many_emotions"
## Dataset Description
- **Homepage:**
### Dataset Summary
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
The data fields are:
- `id`: unique identifier
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `anger` (0), `fear` (1), `joy` (2), `love` (3), `sadness` (4), `surprise` (5), `neutral` (6).
- `license`: inherited license from source dataset
- `dataset`: source dataset
- `language`: text language
### Data Splits
The dataset has 2 configurations:
- raw: with 5 configurations for each language
- split: with train, validation, and test splits (see the loading sketch below)
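A minimal loading sketch, assuming the configuration names listed above:
```python
# A minimal sketch, assuming the "split" configuration described above.
from datasets import load_dataset

ds = load_dataset("ma2za/many_emotions", "split")
print(ds["train"][0])  # id, text, label, license, dataset, language
```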
## Dataset Creation
### Curation Rationale
The raw split contains duplicates.
In the "split" configuration, identical rows may appear with different labels.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
## Additional Information
### Licensing Information
Each row has its own license which is inherited from the source dataset. |
ChanceFocus/fiqa-sentiment-classification | 2023-07-16T12:37:51.000Z | [
"language:en",
"license:mit",
"arxiv:2211.00083",
"region:us"
] | ChanceFocus | null | null | null | 3 | 6 | ---
language: en
license: mit
dataset_info:
features:
- name: _id
dtype: string
- name: sentence
dtype: string
- name: target
dtype: string
- name: aspect
dtype: string
- name: score
dtype: float64
- name: type
dtype: string
splits:
- name: train
num_bytes: 119567
num_examples: 822
- name: valid
num_bytes: 17184
num_examples: 117
- name: test
num_bytes: 33728
num_examples: 234
download_size: 102225
dataset_size: 170479
---
# FiQA Sentiment Classification
## Dataset Description
This dataset is based on Task 1 of the Financial Sentiment Analysis in the Wild (FiQA) challenge. It follows the same settings as described in the paper 'A Baseline for Aspect-Based Sentiment Analysis in Financial Microblogs and News'. The dataset is split into three subsets: train, valid, and test, with sizes 822, 117, and 234, respectively.
## Dataset Structure
- `_id`: ID of the data point
- `sentence`: The sentence
- `target`: The target of the sentiment
- `aspect`: The aspect of the sentiment
- `score`: The sentiment score
- `type`: The type of the data point (headline or post)
## Additional Information
- Homepage: [FiQA Challenge](https://sites.google.com/view/fiqa/home)
- Citation: [A Baseline for Aspect-Based Sentiment Analysis in Financial Microblogs and News](https://arxiv.org/pdf/2211.00083.pdf)
## Downloading CSV
```python
from datasets import load_dataset
# Load the dataset from the hub
dataset = load_dataset("ChanceFocus/fiqa-sentiment-classification")
# Save the dataset to a CSV file
dataset["train"].to_csv("train.csv")
dataset["valid"].to_csv("valid.csv")
dataset["test"].to_csv("test.csv")
```
|
Zaid/ashaar_dataset | 2023-05-26T20:54:09.000Z | [
"region:us"
] | Zaid | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: poem title
dtype: string
- name: poem meter
dtype: string
- name: poem verses
sequence: string
- name: poem theme
dtype: string
- name: poem url
dtype: string
- name: poet name
dtype: string
- name: poet description
dtype: string
- name: poet url
dtype: string
- name: poet era
dtype: string
- name: poet location
dtype: string
- name: poem description
list:
- name: attributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: children
list:
- name: attributes
struct:
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: href
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: title
dtype: string
- name: value
dtype: string
- name: children
list:
- name: attributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: face
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: children
list:
- name: attributes
struct:
- name: align
dtype: string
- name: face
dtype: string
- name: nowrap
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: lang
dtype: string
- name: style
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: dir
dtype: string
- name: face
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: class
dtype: string
- name: color
dtype: string
- name: dir
dtype: string
- name: id
dtype: string
- name: lang
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: partB
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: name
dtype: string
- name: parentAttributes
struct:
- name: dir
dtype: string
- name: style
dtype: string
- name: partA
dtype: string
- name: partB
dtype: string
- name: size
dtype: int64
- name: text
dtype: string
- name: truncated
dtype: bool
- name: type
dtype: string
- name: poem language type
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 600307848
num_examples: 136422
download_size: 248952816
dataset_size: 600307848
---
# Dataset Card for "ashaar_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tasksource/winowhy | 2023-05-31T08:23:25.000Z | [
"language:en",
"license:mit",
"region:us"
] | tasksource | null | null | null | 0 | 6 | ---
license: mit
language:
- en
---
https://github.com/HKUST-KnowComp/WinoWhy
```
@inproceedings{zhang2020WinoWhy,
author = {Hongming Zhang and Xinran Zhao and Yangqiu Song},
title = {WinoWhy: A Deep Diagnosis of Essential Commonsense Knowledge for Answering Winograd Schema Challenge},
booktitle = {Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL) 2020},
year = {2020}
}
``` |
MaCoCu/parallel_data | 2023-05-30T23:05:07.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:bs",
"language:bg",
"language:en",
"language:is",
"language:hr",
"language:cnr",
"language:mk",
... | MaCoCu | The MaCoCu parallel dataset is an English-centric collection of 11
parallel corpora including the following languages: Albanian,
Bulgarian, Bosnian, Croatian, Icelandic, Macedonian, Maltese,
Montenegrin, Serbian, Slovenian, and Turkish. These corpora have
been automatically crawled from national and generic top-level
domains (for example, ".hr" for Croatian or ".is" for Icelandic);
then, a parallel curation pipeline has been applied to produce
the final data (see https://github.com/bitextor/bitextor). | @inproceedings{banon2022macocu,
title={MaCoCu: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages},
author={Ban{\'o}n, Marta and Espla-Gomis, Miquel and Forcada, Mikel L and Garc{\'\i}a-Romero, Cristian and Kuzman, Taja and Ljube{\v{s}}i{\'c}, Nikola and van Noord, Rik and Sempere, Leopoldo Pla and Ram{\'\i}rez-S{\'a}nchez, Gema and Rupnik, Peter and others},
booktitle={23rd Annual Conference of the European Association for Machine Translation, EAMT 2022},
pages={303--304},
year={2022},
organization={European Association for Machine Translation}
} | null | 0 | 6 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- bs
- bg
- en
- is
- hr
- cnr
- mk
- mt
- sl
- sr
- sq
- tr
license:
- cc0-1.0
multilinguality:
- translation
pretty_name: MaCoCu_parallel
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- translation
task_ids: []
dataset_info:
- config_name: enis
features:
- name: translation
dtype:
translation:
languages:
- is
- en
splits:
- name: train
num_bytes: 133883139
num_examples: 546172
download_size: 133883139
dataset_size: 133883139
- config_name: enbg
features:
- name: translation
dtype:
translation:
languages:
- bg
- en
splits:
- name: train
num_bytes: 133883139
num_examples: 546172
download_size: 133883139
dataset_size: 133883139
---
### Dataset Summary
The MaCoCu parallel corpora have been crawled from the web and consist of pairs of source and target segments (one or several sentences) together with additional metadata. The following metadata is included:
- "src_url" and "trg_url": source and target document URL;
- "src_text" and "trg_text": text in non-English language and in English Language;
- "bleualign_score": similarity score as provided by the sentence alignment tool Bleualign (value between 0 and 1);
- "src_deferred_hash" and "trg_deferred_hash": hash identifier for the corresponding segment;
- "src_paragraph_id" and "trg_paragraph_id": identifier of the paragraph where the segment appears in the original document;
- "src_doc_title" and "trg_doc_title": title of the documents from which segments where obtained;
- "src_crawl_date" and "trg_crawl_date": date and time when source and target documents where donwoaded;
- "src_file_type" and "trg_file_type": type of the original documents (usually HTML format);
- "src_boilerplate" and "trg_boilerplate": are source or target segments boilerplates?
- "bifixer_hash": hash identifier for the segment pair;
- "bifixer_score": score that indicates how likely are segments to be correct in their corresponding language;
- "bicleaner_ai_score": score that indicates how likely are segments to be parallel;
- "biroamer_entities_detected": do any of the segments contain personal information?
- "dsi": a DSI class (“dsi”): information whether the segment is connected to any of Digital Service Infrastructure (DSI) classes (e.g., cybersecurity, e-health, e-justice, open-data-portal), defined by the Connecting Europe Facility (https://github.com/RikVN/DSI);
- "translation_direction": translation direction and machine translation identification ("translation-direction"): the source segment in each segment pair was identified by using a probabilistic model (https://github.com/RikVN/TranslationDirection), which also determines if the translation has been produced by a machine-translation system;
- "en_document_level_variant": the language variant of English (British or American, using a lexicon-based English variety classifier - https://pypi.org/project/abclf/) was identified on document and domain level;
- "domain_en": name of the web domain for the English document;
- "en_domain_level_variant": language variant for English at the level of the web domain.
To load a language pair, specify the dataset and the language pair, with English first:
```python
dataset = load_dataset("MaCoCu/parallel_data", "en-is")
```
|
Meranti/CLAP_freesound | 2023-07-09T17:09:18.000Z | [
"task_categories:audio-classification",
"size_categories:1M<n<10M",
"language:en",
"audio",
"text",
"contrastive learning",
"region:us"
] | Meranti | null | null | null | 0 | 6 | ---
task_categories:
- audio-classification
language:
- en
tags:
- audio
- text
- contrastive learning
pretty_name: freesound
size_categories:
- 1M<n<10M
---
# LAION-Audio-630K Freesound Dataset
[LAION-Audio-630K](https://github.com/LAION-AI/audio-dataset/blob/main/laion-audio-630k/README.md) is the largest publicly available audio-text dataset, an order of magnitude larger than previous audio-text datasets (as of 2022-11-05). Notably, it combines eight distinct datasets, including the Freesound dataset.
Specifically, this Hugging Face repository contains two versions of the Freesound dataset. Details of each version (e.g. how captions are made) can be found in the "Data Card" column of the table below.
- **Freesound (full)**: The complete Freesound dataset, available in the `/freesound` folder.
- **Freesound (no overlap)**: Made from Freesound (full), with samples from ESC50, FSD50K, UrbanSound8K and Clotho removed; available in the `/freesound_no_overlap` folder.
For the structure and format of the `freesound` and `freesound_no_overlap` folders, please refer to [this page](https://github.com/LAION-AI/audio-dataset/blob/main/data_preprocess/README.md).
| Name |Duration |Number of Samples |Data Type | Metadata | Data Card |
|--------------------------------------------------|-------------------------|--------------------|--------- |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------- |
| Freesound (no overlap) |2817.31hrs | 460801 |1-2 captions per audio, audio | [website](https://freesound.org/) <br> [csv]()|[data card](/data_card/freesound.md)|
| Freesound (full) |3033.38hrs | 515581 |1-2 captions per audio, audio | [website](https://freesound.org/) <br> [csv]() |[data card](/data_card/freesound.md)|
## Metadata csv file
For each of the two datasets, we provide a metadata csv file including the following columns:
- **audio_filename**: The filename of the audio file in the `.tar` files (example: `2394.flac`)
- **caption_i**: the i-th caption of the audio file
- **freesound_id**: The freesound id of the audio file.
- **username**: The Freesound username of the uploader of the audio file.
- **freesound_url**: The URL of the audio file on freesound.org.
- **license**: The license of the audio file, e.g. `http://creativecommons.org/licenses/by/3.0/`.
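A minimal sketch of inspecting one metadata file with pandas; the csv filename below is hypothetical, so check the repository tree under `freesound/` and `freesound_no_overlap/` for the actual file names:
```python
# A minimal sketch; "freesound/metadata.csv" is a hypothetical path --
# check the repository tree for the actual metadata csv names.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    "Meranti/CLAP_freesound",
    "freesound/metadata.csv",  # hypothetical path
    repo_type="dataset",
)
df = pd.read_csv(path)
print(df[["audio_filename", "freesound_url", "license"]].head())
```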
## Credits & Licence
- **!!!TERM OF USE!!!**: **By downloading files in this repository, you agree that you will use them <u> for research purposes only </u>. If you want to use Freesound clips in LAION-Audio-630K for commercial purposes, please contact Frederic Font Corbera at frederic.font@upf.edu.**
### Freesound Credit:
All audio clips from Freesound are released under Creative Commons (CC) licenses, while each clip has its own license as defined by the clip uploader in Freesound, some of them requiring attribution to their original authors and some forbidding further commercial reuse. Specifically, here are the statistics on the licenses of the audio clips involved in LAION-Audio-630K:
| License | Number of Samples |
| :--- | :--- |
| http://creativecommons.org/publicdomain/zero/1.0/ | 260134 |
| https://creativecommons.org/licenses/by/4.0/ | 97090 |
| http://creativecommons.org/licenses/by/3.0/ | 89337 |
| http://creativecommons.org/licenses/by-nc/3.0/ | 31680 |
| https://creativecommons.org/licenses/by-nc/4.0/ | 26736 |
| http://creativecommons.org/licenses/sampling+/1.0/ | 11116 |
## Acknowledgement
The whole collection process as well as all usage of LAION-Audio-630K is conducted by the German non-profit pure-research organization [LAION](https://laion.ai/). All contributors and collectors of the dataset are considered open source contributors affiliated with LAION. These community contributors (Discord ids) include, but are not limited to: @marianna13#7139, @Chr0my#0173, @PiEquals4#1909, @Yuchen Hui#8574, @Antoniooooo#4758, @IYWO#9072, krishna#1648, @dicknascarsixtynine#3885, and @turian#1607. We would like to thank all of them for their efforts on the LAION-Audio-630k dataset. |
tasksource/zero-shot-label-nli | 2023-06-23T14:48:53.000Z | [
"task_categories:zero-shot-classification",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:other",
"region:us"
] | tasksource | null | null | null | 3 | 6 | ---
license: other
task_categories:
- zero-shot-classification
- text-classification
task_ids:
- natural-language-inference
language:
- en
dataset_info:
features:
- name: labels
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 551417533
num_examples: 1090333
- name: validation
num_bytes: 10825569
num_examples: 14419
- name: test
num_bytes: 9738922
num_examples: 14680
download_size: 302498339
dataset_size: 571982024
---
[tasksource](https://github.com/sileod/tasksource) classification tasks recast as natural language inference.
This dataset is intended to improve label understanding in [zero-shot classification HF pipelines](https://huggingface.co/docs/transformers/main/main_classes/pipelines#transformers.ZeroShotClassificationPipeline).
Inputs that are text pairs are separated by a newline (\n).
```python
from transformers import pipeline
classifier = pipeline(model="sileod/deberta-v3-base-tasksource-nli")
classifier(
"I have a problem with my iphone that needs to be resolved asap!!",
candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
)
```
[deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli) now includes `label-nli` in its training mix (a relatively small portion, to keep the model general). Note that NLI models work for label-like zero-shot classification even without specific supervision (https://aclanthology.org/D19-1404.pdf).
```
@article{sileo2023tasksource,
title={tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation},
author={Sileo, Damien},
year={2023}
}
``` |
d0rj/HC3-ru | 2023-06-05T12:46:32.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:Hello-SimpleAI/HC3",
"language... | d0rj | null | null | null | 0 | 6 | ---
task_categories:
- text-classification
- question-answering
- sentence-similarity
- zero-shot-classification
language_creators:
- translated
language:
- ru
multilinguality:
- monolingual
tags:
- ChatGPT
- SimpleAI
- Detection
- OOD
size_categories:
- 10K<n<100K
license: cc-by-sa-4.0
pretty_name: HC3 (ru)
source_datasets:
- Hello-SimpleAI/HC3
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: human_answers
sequence: string
- name: chatgpt_answers
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 135406074.0
num_examples: 24322
download_size: 62739799
dataset_size: 135406074.0
---
# Dataset Card for "HC3-ru"
This is a version of the [Hello-SimpleAI/HC3 dataset](https://huggingface.co/datasets/Hello-SimpleAI/HC3) translated into Russian.
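A minimal loading sketch, assuming the single `train` split declared in the metadata above:
```python
# A minimal sketch, assuming the single train split declared in the metadata.
from datasets import load_dataset

ds = load_dataset("d0rj/HC3-ru", split="train")
row = ds[0]
print(row["question"])
print(row["source"], len(row["human_answers"]), len(row["chatgpt_answers"]))
```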
## Citation
Check out the paper [arXiv: 2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
journal={arXiv preprint arxiv:2301.07597}
year = "2023",
}
``` |
d0rj/hh-rlhf-ru | 2023-06-05T13:53:03.000Z | [
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:Anthropic/hh-rlhf",
"language:ru",
"license:mit",
"human-feedback",
"ChatGPT",
"reward",
"region:us"
] | d0rj | null | null | null | 2 | 6 | ---
language_creators:
- translated
language:
- ru
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
pretty_name: HH for RLHF (ru)
source_datasets:
- Anthropic/hh-rlhf
license: mit
tags:
- human-feedback
- ChatGPT
- reward
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 573845356.0
num_examples: 160800
- name: test
num_bytes: 30792414.0
num_examples: 8552
download_size: 281014419
dataset_size: 604637770.0
---
# Dataset Card for "hh-rlhf-ru"
This is a version of the [Anthropic/hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) translated into Russian.
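A minimal loading sketch, assuming the `train`/`test` splits declared in the metadata above:
```python
# A minimal sketch, assuming the train/test splits declared in the metadata.
from datasets import load_dataset

ds = load_dataset("d0rj/hh-rlhf-ru")
print(ds["train"][0]["chosen"][:200])    # preferred dialogue, truncated
print(ds["train"][0]["rejected"][:200])  # dispreferred dialogue, truncated
```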
|
daven3/geosignal | 2023-08-28T04:40:53.000Z | [
"task_categories:question-answering",
"license:apache-2.0",
"region:us"
] | daven3 | null | null | null | 4 | 6 | ---
license: apache-2.0
task_categories:
- question-answering
---
## Instruction Tuning: GeoSignal
Scientific domain adaptation has two main steps during instruction tuning.
- Instruction tuning with general instruction-tuning data. Here we use Alpaca-GPT4.
- Instruction tuning with restructured domain knowledge, which we call expertise instruction tuning. For K2, we use knowledge-intensive instruction data, GeoSignal.
***The following illustrates the recipe for training a domain-specific language model:***

- **Adapter Model on [Huggingface](https://huggingface.co/): [daven3/k2_it_adapter](https://huggingface.co/daven3/k2_it_adapter)**
For the design of the GeoSignal, we collect knowledge from various data sources, like:

GeoSignal is designed for knowledge-intensive instruction tuning and used for aligning with experts.
The full version will be uploaded soon; email [daven](mailto:davendw@sjtu.edu.cn) for potential research cooperation.
|
grantprice/DND-NLP | 2023-06-09T23:34:20.000Z | [
"region:us"
] | grantprice | null | null | null | 0 | 6 | Entry not found |
Vinomaly/1k-sample-comex | 2023-06-07T03:42:21.000Z | [
"task_categories:feature-extraction",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:es",
"region:us"
] | Vinomaly | null | null | null | 0 | 6 | ---
task_categories:
- feature-extraction
- text-generation
language:
- es
size_categories:
- 1K<n<10K
--- |
MrbBakh/Twitter_Sentiment | 2023-06-09T12:19:32.000Z | [
"region:us"
] | MrbBakh | null | null | null | 0 | 6 | Entry not found |
deepghs/anime_ch_sex | 2023-06-15T08:45:48.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:mit",
"art",
"region:us"
] | deepghs | null | null | null | 3 | 6 | ---
license: mit
task_categories:
- image-classification
tags:
- art
size_categories:
- 10K<n<100K
--- |
pranjali97/Bias-detection-combined | 2023-06-11T23:48:39.000Z | [
"region:us"
] | pranjali97 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3698636
num_examples: 38213
- name: validation
num_bytes: 414977
num_examples: 4246
download_size: 0
dataset_size: 4113613
---
# Dataset Card for "Bias-detection-combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RepoFusion/Stack-Repo | 2023-07-10T19:43:46.000Z | [
"license:other",
"arxiv:2206.12839",
"arxiv:2306.10998",
"region:us"
] | RepoFusion | This is the Stack-Repo dataset | @article{shrivastava2023repofusion,
title={RepoFusion: Training Code Models to Understand Your Repository},
author={Shrivastava, Disha and Kocetkov, Denis and de Vries, Harm and Bahdanau, Dzmitry and Scholak, Torsten},
journal={arXiv preprint arXiv:2306.10998},
year={2023}
} | null | 5 | 6 | ---
license: other
---
# Summary of the Dataset
## Description
Stack-Repo is a dataset of 200 Java repositories from GitHub with permissive licenses and near-deduplicated files that are augmented with three types of repository contexts.
- Prompt Proposal (PP) Contexts: These contexts are based on the prompt proposals from the paper [Repository-Level Prompt Generation for Large Language Models of Code](https://arxiv.org/abs/2206.12839).
- BM25 Contexts: These contexts are obtained based on the BM25 similarity scores.
- RandomNN Contexts: These contexts are obtained using the nearest neighbors in the representation space of an embedding model.
For more details, please check our paper [RepoFusion: Training Code Models to Understand Your Repository](https://arxiv.org/abs/2306.10998).
The original Java source files are obtained using a [modified version](https://huggingface.co/datasets/bigcode/the-stack-dedup) of [The Stack](https://huggingface.co/datasets/bigcode/the-stack).
## Data Splits
The dataset consists of three splits: `train`, `validation` and `test`, comprising 100, 50, and 50 repositories, respectively.
## Data Organization
Each split contains a separate folder per repository, where each repository folder holds all of its `.java` source code files in the original directory structure, along with three `.json` files corresponding to the PP, BM25 and RandomNN repo contexts. In terms of HuggingFace Datasets terminology, we have four subdatasets or configurations.
- `PP_contexts`: Prompt Proposal repo contexts.
- `bm25_contexts`: BM25 repo contexts.
- `randomNN_contexts`: RandomNN repo contexts.
- `sources`: actual java (`.java`) source code files
# Dataset Usage
To clone the dataset locally
```
git clone https://huggingface.co/datasets/RepoFusion/Stack-Repo <local_path>
```
To load the desired dataset configuration and split:
```python
import datasets
ds = datasets.load_dataset(
"RepoFusion/Stack-Repo",
name="<configuration_name>",
split="<split_name>"
data_dir="<local_path>"
)
```
NOTE: The configurations for the repo contexts `bm25_contexts`, `PP_contexts` and `randomNN_contexts` can be loaded directly by specifying the corresponding
`<configuration_name>` along with the `<split_name>` in the `load_dataset` command listed above, without cloning the repo locally.
For `sources`, if the repo has not been cloned beforehand or `data_dir` is not specified, a `ManualDownloadError` will be raised.
## Data Format
The expected data format of the `.json` files is a list of target holes and corresponding repo contexts where each entry in the `.json` file corresponds to a target hole consisting of the location of the target hole, the target hole as a string, the surrounding context as a string and a list of repo-contexts as strings. Specifically, each row is a dictionary containing
- `id`: hole_id (location of the target hole)
- `question`: surrounding context
- `target`: target hole
- `ctxs`: a list of repo contexts where each item is a dictionary containing
- `title`: name of the repo context
- `text`: content of the repo context
The actual Java sources can be accessed directly via the file system. Paths follow the format `[<data_set_root>/data/<split_name>/<github_user>/<repo_name>/<path/to/every/java/file/in/the/repo>.java]`. When accessed through `Datasets.load_dataset`, the data fields for the `sources` can be specified as below.
```python
features = datasets.Features({
'file': datasets.Value('string'),
'content': datasets.Value('string')
})
```
When accessed through `Datasets.load_dataset`, the data fields for the repo contexts can be specified as below.
```python
features = datasets.Features({
'id': datasets.Value('string'),
'hole_file': datasets.Value('string'),
'hole_line': datasets.Value('int32'),
'hole_pos': datasets.Value('int32'),
'question': datasets.Value('string'),
'target': datasets.Value('string'),
'answers': datasets.Sequence(
datasets.Value('string')
),
'ctxs': [{
'title': datasets.Value('string'),
'text': datasets.Value('string'),
'score': datasets.Value('float64')
}]
})
```
# Additional Information
## Dataset Curators
- Disha Shrivastava, dishu.905@gmail.com
- Denis Kocetkov, denis.kocetkov@servicenow.com
## Licensing Information
Stack-Repo is derived from a modified version of The Stack. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack-dedup/blob/main/licenses.json).
## Citation
```
@article{shrivastava2023repofusion,
title={RepoFusion: Training Code Models to Understand Your Repository},
author={Shrivastava, Disha and Kocetkov, Denis and de Vries, Harm and Bahdanau, Dzmitry and Scholak, Torsten},
journal={arXiv preprint arXiv:2306.10998},
year={2023}
}
```
|
open-source-metrics/preprocessed_pip | 2023-10-03T09:13:41.000Z | [
"region:us"
] | open-source-metrics | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: datasets
dtype: int64
- name: transformers
dtype: int64
- name: pytorch_image_models
dtype: int64
- name: huggingface_hub
dtype: int64
- name: safetensors
dtype: int64
- name: peft
dtype: int64
- name: diffusers
dtype: int64
- name: tokenizers
dtype: int64
- name: gradio
dtype: int64
- name: optimum
dtype: int64
- name: accelerate
dtype: int64
- name: evaluate
dtype: int64
- name: pytorch
dtype: int64
- name: tensorflow
dtype: int64
- name: langchain
dtype: int64
- name: day
dtype: string
splits:
- name: raw
num_bytes: 201140
num_examples: 1483
- name: wow
num_bytes: 28759
num_examples: 212
download_size: 116853
dataset_size: 229899
---
# Dataset Card for "preprocessed_pip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
renumics/cifar100-outlier | 2023-06-30T20:08:26.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:unknown",
"region:us"
] | renumics | null | null | null | 0 | 6 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-80-Million-Tiny-Images
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-100
pretty_name: Cifar100
dataset_info:
features:
- name: img
dtype: image
- name: fine_label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': cra
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: coarse_label
dtype:
class_label:
names:
'0': aquatic_mammals
'1': fish
'2': flowers
'3': food_containers
'4': fruit_and_vegetables
'5': household_electrical_devices
'6': household_furniture
'7': insects
'8': large_carnivores
'9': large_man-made_outdoor_things
'10': large_natural_outdoor_scenes
'11': large_omnivores_and_herbivores
'12': medium_mammals
'13': non-insect_invertebrates
'14': people
'15': reptiles
'16': small_mammals
'17': trees
'18': vehicles_1
'19': vehicles_2
- name: embedding_foundation
sequence: float32
- name: embedding_ft
sequence: float32
- name: outlier_score_ft
dtype: float64
- name: outlier_score_foundation
dtype: float64
- name: nn_image
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
splits:
- name: train
num_bytes: 583557742.0
num_examples: 50000
download_size: 643988234
dataset_size: 583557742.0
---
# Dataset Card for "cifar100-outlier"
📚 This dataset is an enriched version of the [CIFAR-100 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
The workflow is described in the medium article: [Changes of Embeddings during Fine-Tuning of Transformers](https://medium.com/@markus.stoll/changes-of-embeddings-during-fine-tuning-c22aa1615921).
## Explore the Dataset
The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) allows you to explore this dataset. You can find a Hugging Face Space running Spotlight with this dataset here: <https://huggingface.co/spaces/renumics/cifar100-outlier>.

Or you can explore it locally:
```python
!pip install renumics-spotlight datasets
from renumics import spotlight
import datasets
ds = datasets.load_dataset("renumics/cifar100-outlier", split="train")
df = ds.rename_columns({"img": "image", "fine_label": "labels"}).to_pandas()
df["label_str"] = df["labels"].apply(lambda x: ds.features["fine_label"].int2str(x))
dtypes = {
"nn_image": spotlight.Image,
"image": spotlight.Image,
"embedding_ft": spotlight.Embedding,
"embedding_foundation": spotlight.Embedding,
}
spotlight.show(
df,
dtype=dtypes,
layout="https://spotlight.renumics.com/resources/layout_pre_post_ft.json",
)
``` |
juniorrios/icomp-dog-breed | 2023-06-15T00:49:19.000Z | [
"region:us"
] | juniorrios | null | null | null | 0 | 6 | Entry not found |
Zilun/RS5M | 2023-08-16T19:00:19.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | Zilun | null | null | null | 4 | 6 | ---
license: cc-by-nc-4.0
---
Use the `v4` branch (e.g. pass `revision="v4"` to `load_dataset` or `snapshot_download`). |
hivaze/emphatical_daily_dialogues | 2023-06-19T10:44:54.000Z | [
"region:us"
] | hivaze | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: dialog
sequence: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 23701234
num_examples: 19325
- name: validation
num_bytes: 2413614
num_examples: 2049
download_size: 12219809
dataset_size: 26114848
---
# Dataset Card for "emphatical_daily_dialogues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jondurbin/airoboros-gpt4-1.4 | 2023-06-29T08:24:56.000Z | [
"license:other",
"region:us"
] | jondurbin | null | null | null | 19 | 6 | ---
license: other
---
A continuation (including many fixes) of [gpt4-1.3](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.3), with:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from [rosettacode.org](https://rosettacode.org/) [dataset](https://huggingface.co/datasets/jondurbin/rosettacode-10) thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
_*Note: I did not filter by token length for this dataset, some are well over 2048 so use carefully.*_
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here, and can an open source model really compete in any meaningful way with gpt-4 quality?
- I am bound by the ToS, but anyone else using the data is not as far as I can tell
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise impermissibly licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this to train a commercial model, but I will leave that up to you.
I personally don't care how you use this data - it is published to allow others to replicate results, but wouldn't mind some attribution if you do use it. |
priyank-m/MJSynth_text_recognition | 2023-07-04T20:49:10.000Z | [
"task_categories:image-to-text",
"size_categories:1M<n<10M",
"language:en",
"region:us"
] | priyank-m | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 12173747703
num_examples: 7224600
- name: val
num_bytes: 1352108669.283
num_examples: 802733
- name: test
num_bytes: 1484450563.896
num_examples: 891924
download_size: 12115256620
dataset_size: 15010306936.179
task_categories:
- image-to-text
language:
- en
size_categories:
- 1M<n<10M
pretty_name: MJSynth
---
# Dataset Card for "MJSynth_text_recognition"
This is the MJSynth dataset for text recognition on document images, synthetically generated, covering 90K English words.
It includes training, validation and test splits.
Source of the dataset: https://www.robots.ox.ac.uk/~vgg/data/text/
Use the dataset streaming functionality to try out the dataset quickly without downloading it in full (see the sketch below; refer: https://huggingface.co/docs/datasets/stream)
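```python
# A minimal streaming sketch; no full download is required.
from datasets import load_dataset

ds = load_dataset("priyank-m/MJSynth_text_recognition", split="train", streaming=True)
sample = next(iter(ds))
print(sample["label"])  # transcription of the synthetic word image
```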
Citation details are provided on the source website (if you use the data, please cite):
```
@InProceedings{Jaderberg14c,
  author = "Max Jaderberg and Karen Simonyan and Andrea Vedaldi and Andrew Zisserman",
  title = "Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition",
  booktitle = "Workshop on Deep Learning, NIPS",
  year = "2014",
}

@Article{Jaderberg16,
  author = "Max Jaderberg and Karen Simonyan and Andrea Vedaldi and Andrew Zisserman",
  title = "Reading Text in the Wild with Convolutional Neural Networks",
  journal = "International Journal of Computer Vision",
  number = "1",
  volume = "116",
  pages = "1--20",
  month = "jan",
  year = "2016",
}
``` |
khushpatel2002/code-messages | 2023-06-23T13:51:57.000Z | [
"license:apache-2.0",
"region:us"
] | khushpatel2002 | null | null | null | 0 | 6 | ---
license: apache-2.0
---
|
causal-lm/baize | 2023-06-24T14:48:19.000Z | [
"region:us"
] | causal-lm | null | null | null | 2 | 6 | Entry not found |
pankajmathur/alpaca_orca | 2023-06-26T14:39:11.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | pankajmathur | null | null | null | 18 | 6 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
Explain tuned Alpaca dataset ~52K created using approaches from Orca Research Paper.
We leverage all of the 15 system instructions provided in Orca Research Paper. to generate custom datasets, in contrast to vanilla instruction tuning approaches used by original datasets.
This helps student models like [orca_mini_13b](https://huggingface.co/psmathur/orca_mini_13b) to learn thought process from teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).
Please see how the **System** prompt is added before each **instruction**. |
Bin12345/HPC_Fortran_CPP | 2023-07-13T02:04:18.000Z | [
"license:mit",
"region:us"
] | Bin12345 | null | null | null | 3 | 6 | ---
license: mit
---
|
kailasv/ArtWhisperer | 2023-08-29T09:49:29.000Z | [
"license:mit",
"region:us"
] | kailasv | null | null | null | 0 | 6 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: user_id
dtype: string
- name: target_id
dtype: string
- name: target_image
dtype: image
- name: target_positive_prompt
dtype: string
- name: target_negative_prompt
dtype: string
- name: target_image_embedding
sequence:
- name: value
dtype: float32
- name: target_positive_text_embedding
sequence:
- name: value
dtype: float32
- name: target_negative_text_embedding
sequence:
- name: value
dtype: float32
- name: Famous person?
dtype: bool
- name: Famous landmark?
dtype: bool
- name: Manmade?
dtype: bool
- name: People?
dtype: bool
- name: Real image?
dtype: bool
- name: AI image?
dtype: bool
- name: Art?
dtype: bool
- name: Nature?
dtype: bool
- name: City?
dtype: bool
- name: Fantasy?
dtype: bool
- name: Sci-fi or space?
dtype: bool
- name: generated_image
dtype: image
- name: generated_positive_prompt
dtype: string
- name: generated_negative_prompt
dtype: string
- name: generated_image_embedding
sequence:
- name: value
dtype: float32
- name: generated_positive_text_embedding
sequence:
- name: value
dtype: float32
- name: generated_negative_text_embedding
sequence:
- name: value
dtype: float32
- name: ai_model_name
dtype: string
- name: trajectory_index
dtype: int32
- name: score
dtype: int32
- name: human_rating
dtype: float32
- name: time_taken
dtype: duration[s]
- name: filtered_image
dtype: bool
splits:
- name: train
num_bytes: 5743017316.686
num_examples: 51026
- name: validation
num_bytes: 475257048.94
num_examples: 4572
download_size: 2185134483
dataset_size: 6218274365.625999
---
|
ChanceFocus/flare-sm-bigdata | 2023-06-25T18:15:36.000Z | [
"region:us"
] | ChanceFocus | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 18720287
num_examples: 4897
- name: valid
num_bytes: 1278834
num_examples: 798
- name: test
num_bytes: 2379111
num_examples: 1472
download_size: 11003337
dataset_size: 22378232
---
# Dataset Card for "flare-sm-bigdata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChanceFocus/flare-sm-acl | 2023-06-25T18:16:24.000Z | [
"region:us"
] | ChanceFocus | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 70385369
num_examples: 20781
- name: valid
num_bytes: 9049127
num_examples: 2555
- name: test
num_bytes: 13359338
num_examples: 3720
download_size: 46311736
dataset_size: 92793834
---
# Dataset Card for "flare-sm-acl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jinmang2/ucf-crime-tencrop-i3d | 2023-06-29T08:37:38.000Z | [
"region:us"
] | jinmang2 | null | null | null | 0 | 6 | Entry not found |
elizathornton/elizabeth_gaskell_unfinished_novel | 2023-09-23T14:00:51.000Z | [
"region:us"
] | elizathornton | null | null | null | 0 | 6 | Entry not found |
TinyPixel/oasst1 | 2023-07-13T12:37:46.000Z | [
"language:en",
"region:us"
] | TinyPixel | null | null | null | 0 | 6 | ---
language: en
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 9202082
num_examples: 8274
download_size: 5256397
dataset_size: 9202082
---
# Dataset Card for "oasst1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HoangHa/good_instructions | 2023-07-03T14:43:52.000Z | [
"license:apache-2.0",
"region:us"
] | HoangHa | null | null | null | 0 | 6 | ---
license: apache-2.0
---
|
jjzha/sayfullina | 2023-09-07T12:13:23.000Z | [
"language:en",
"license:unknown",
"region:us"
] | jjzha | null | null | null | 0 | 6 | ---
license: unknown
language: en
---
This is the soft-skill dataset created by:
```
@inproceedings{sayfullina2018learning,
title={Learning representations for soft skill matching},
author={Sayfullina, Luiza and Malmi, Eric and Kannala, Juho},
booktitle={Analysis of Images, Social Networks and Texts: 7th International Conference, AIST 2018, Moscow, Russia, July 5--7, 2018, Revised Selected Papers 7},
pages={141--152},
year={2018},
organization={Springer}
}
```
There are no document delimiters. Data is split by user `jjzha`.
Number of samples (sentences):
- train: 3705
- dev: 1855
- test: 1851
Sources:
- Adzuna (UK)
Type of tags:
- B-SOFT
- I-SOFT
- O
Sample:
```
{
"idx": 1853,
"tokens": ["and", "sensitive", "when", "deal", "with", "customer", "be", "enthusiastic", "always", "eager", "to", "learn", "and", "develop", "knowledge", "and", "skill"],
"tags_skill": ["O", "O", "O", "O", "O", "O", "O", "B-SOFT", "I-SOFT", "I-SOFT", "I-SOFT", "I-SOFT", "O", "O", "O", "O", "O"]
}
``` |
jjzha/fijo | 2023-09-07T12:59:41.000Z | [
"language:fr",
"license:cc-by-nc-sa-4.0",
"region:us"
] | jjzha | null | null | null | 0 | 6 | ---
license: cc-by-nc-sa-4.0
language: fr
---
This is the skill dataset created by:
```
@article{beauchemin-2022-fijo,
author = {Beauchemin, David and Laumonier, Julien and Ster, Yvan Le and Yassine, Marouane},
journal = {Proceedings of the Canadian Conference on Artificial Intelligence},
year = {2022},
month = {may 27},
note = {https://caiac.pubpub.org/pub/72bhunl6},
publisher = {Canadian Artificial Intelligence Association (CAIAC)},
title = {``{FIJO}'': a {French} {Insurance} {Soft} {Skill} {Detection} {Dataset}},
}
```
There are no document delimiters.
Number of samples (sentences):
- train: 399
- dev: 49
- test: 49
Sources:
- This dataset was collected as part of the multidisciplinary project Femmes face aux défis de la transformation numérique : une étude de cas dans le secteur des assurances (Women Facing the Challenges of Digital Transformation: A Case Study in the Insurance Sector) at Université Laval, funded by the Future Skills Centre. It includes job offers, in French, from insurance companies between 2009 and 2020.
Type of tags:
- BIO tags in `tags_skill` with fine-grained labels:
- PENSEE: thoughts
- RESULTATS: results
- RELATIONNEL: relational
- PERSONNEL: personal
Sample:
```
{
"idx": 47, "tokens": ["-", "Sens", "de", "l\u2019analyse", "\u00e9coute", "et", "minutie", "de", "transcription", "des", "informations", "-", "Professionnalisme", "vu", "le", "recueillement", "d'informations", "souvent", "d\u00e9licates."],
"tags_skill": ["O", "B-PENSEE", "I-PENSEE", "I-PENSEE", "B-RELATIONNEL", "O", "B-PERSONNEL", "I-PERSONNEL", "I-PERSONNEL", "I-PERSONNEL", "I-PERSONNEL", "O", "B-PERSONNEL", "O", "O", "B-RELATIONNEL", "I-RELATIONNEL", "I-RELATIONNEL", "I-RELATIONNEL"]
}
``` |
clu-ling/clupubhealth | 2023-08-02T02:22:46.000Z | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] | clu-ling | null | @inproceedings{kotonya-toni-2020-explainable,
title = "Explainable Automated Fact-Checking for Public Health Claims",
author = "Kotonya, Neema and
Toni, Francesca",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.623",
pages = "7740--7754",
} | null | 0 | 6 | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
- 10K<n<100K
---
# `clupubhealth`
The `CLUPubhealth` dataset is based on the [PUBHEALTH fact-checking dataset](https://github.com/neemakot/Health-Fact-Checking).
The PUBHEALTH dataset contains claims, explanations, and main texts. The explanations function as vetted summaries of the main texts. The CLUPubhealth dataset repurposes these fields into summaries and texts for use in training Summarization models such as Facebook's BART.
There are currently four dataset configs, each with three splits (see Usage):
### `clupubhealth/mini`
This config includes only 200 samples per split. This is mostly used in testing scripts when small sets are desirable.
### `clupubhealth/base`
This is the base dataset, which includes the full PUBHEALTH set excluding samples labeled false. The `test` split is shortened to 200 samples, which allows for faster evaluation steps during training.
### `clupubhealth/expanded`
Where the base `train` split contains 5,078 data points, this expanded set includes 62,163. ChatGPT was used to generate new versions of the summaries in the base set. GPT expansion produced 72,498 samples in total; this was reduced to ~62k after samples with poor BERTScores were eliminated.
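As a rough illustration of that filtering step (the `bert-score` package and the 0.85 threshold here are assumptions, not the authors' exact procedure):
```python
# Hedged sketch: keep only generated summaries whose BERTScore F1 clears a threshold
from bert_score import score

def keep_good_summaries(generated, references, threshold=0.85):
    _, _, f1 = score(generated, references, lang="en", verbose=False)
    return [g for g, f in zip(generated, f1.tolist()) if f >= threshold]
```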
### `clupubhealth/test`
This config has the full `test` split with ~1200 samples. Used for post-training evaluation.
## USAGE
To use the CLUPubhealth dataset, load it with the `datasets` library:
```python
from datasets import load_dataset
data = load_dataset("clu-ling/clupubhealth", "base")
# Accepted config names: `mini`, `base`, `expanded`, `test`
``` |
Falah/sentiments-dataset-381-classes | 2023-07-05T10:31:19.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | Falah | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiment
dtype: string
splits:
- name: train
num_bytes: 104602
num_examples: 1061
download_size: 48213
dataset_size: 104602
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: sentiments-dataset-381-classes
size_categories:
- 1K<n<10K
---
# Sentiments Dataset (381 Classes)
## Dataset Description
This dataset contains a collection of labeled sentences categorized into 381 different sentiment classes. The dataset provides a wide range of sentiment labels to facilitate fine-grained sentiment analysis tasks. Each sentence is associated with a sentiment class name.
## Dataset Information
- Number of classes: 381
- Features: `text` (string), `sentiment` (string)
- Number of examples: 1,061
## Class Names
The dataset includes the following sentiment class names as examples:
- Positive
- Negative
- Neutral
- Joyful
- Disappointed
- Worried
- Surprised
- Grateful
- Indifferent
- Sad
- Angry
- Relieved
- Sentiment
- Excited
- Hopeful
- Anxious
- Satisfied
- Happy
- Nostalgic
- Inspired
- Impressed
- Amazed
- Touched
- Proud
- Intrigued
- Relaxed
- Content
- Comforted
- Motivated
- Frustrated
- Delighted
- Moved
- Curious
- Fascinated
- Engrossed
- Addicted
- Eager
- Provoked
- Energized
- Controversial
- Significant
- Revolutionary
- Optimistic
- Impactful
- Compelling
- Enchanted
- Peaceful
- Disillusioned
- Thrilled
- Consumed
- Engaged
- Trendy
- Informative
- Appreciative
- Enthralled
- Enthusiastic
- Influenced
- Validated
- Reflective
- Emotional
- Concerned
- Promising
- Empowered
- Memorable
- Transformative
- Inclusive
- Groundbreaking
- Evocative
- Respectful
- Outraged
- Unity
- Enlightening
- Artistic
- Cultural
- Diverse
- Vibrant
- Prideful
- Captivated
- Revealing
- Inspiring
- Admiring
- Empowering
- Connecting
- Challenging
- Symbolic
- Immersed
- Evolving
- Insightful
- Reformative
- Celebratory
- Validating
- Diversity
- Eclectic
- Comprehensive
- Uniting
- Influential
- Honoring
- Transporting
- Resonating
- Chronicle
- Preserving
- Replicated
- Impressive
- Fascinating
- Tributary
- Momentum
- Awe-inspiring
- Unearthing
- Exploratory
- Immersive
- Transportive
- Personal
- Resilient
- Mesmerized
- Legendary
- Awareness
- Evidence-based
- Contemporary
- Connected
- Valuable
- Referencing
- Camaraderie
- Inspirational
- Evoke
- Emotive
- Chronicling
- Educational
- Serene
- Colorful
- Melodious
- Dramatic
- Enlivened
- Wonderstruck
- Enchanting
- Grandiose
- Abundant
- Harmonious
- Captivating
- Mesmerizing
- Dedicated
- Powerful
- Mystical
- Picturesque
- Opulent
- Revitalizing
- Fragrant
- Spellbinding
- Lush
- Breathtaking
- Passionate
- Melodic
- Wonderland
- Invigorating
- Dappled
- Flourishing
- Ethereal
- Elaborate
- Kaleidoscope
- Harmonizing
- Tragic
- Transforming
- Marveling
- Enveloped
- Reverberating
- Sanctuary
- Graceful
- Spectacular
- Golden
- Melancholic
- Transcendent
- Delicate
- Awakening
- Intertwined
- Indelible
- Verdant
- Heartrending
- Fiery
- Inviting
- Majestic
- Lullaby-like
- Kissed
- Behold
- Soulful
- Splendid
- Whispering
- Masterpiece
- Moving
- Crystalline
- Tapestry
- Haunting
- Renewal
- Wisdom-filled
- Stunning
- Sun-kissed
- Symphony
- Awestruck
- Dancing
- Heart-wrenching
- Magical
- Gentle
- Emotion-evoking
- Embracing
- Floating
- Tranquil
- Celestial
- Breathless
- Symphonic
- Stillness
- Delightful
- Flawless
- Commanding
- Embraced
- Heartfelt
- Precise
- Adorned
- Beautiful
- Scattering
- Timeless
- Radiant
- Regal
- Sparkling
- Resilience
- Recognized
- Echoing
- Rebirth
- Cradled
- Tirelessly
- Glowing
- Icy
- Brilliant
- Anticipation
- Awakened
- Blossoming
- Enthralling
- Excitement
- Vivid
- Spellbound
- Mellifluous
- Intricate
- Silent
- Contrasting
- Poignant
- Perfumed
- Pure
- Magnificent
- Exquisite
- Anguished
- Harmonic
- Kaleidoscopic
- Gripping
- Soothing
- Intense
- Poetic
- Fragile
- Unwavering
- Intriguing
- Fairy-tale
- Ephemeral
- Joyous
- Resplendent
- Elegant
- Coaxing
- Illuminating
- Thunderous
- Cool
- Exciting
- Teeming
- Blissful
- Enduring
- Raw
- Adventurous
- Mysterious
- Enrapturing
- Marvelous
- Swirling
- Resonant
- Careful
- Whimsical
- Intertwining
- ...and more
## Usage example
```python
import pandas as pd
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Falah/sentiments-dataset-381-classes")

# Convert the train split to a pandas DataFrame
df = pd.DataFrame(dataset["train"])

# Get the unique class names from the "sentiment" column
class_names = df["sentiment"].unique()

# Print the unique class names
for name in class_names:
    print(f"Class Name: {name}")
```
## Application
The Sentiments Dataset (381 Classes) can be applied in various NLP applications, such as sentiment analysis and text classification.
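For instance, a minimal text-classification baseline (the scikit-learn pipeline below is illustrative; with 381 classes and only ~1,000 examples it is a toy, not a benchmark):
```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = load_dataset("Falah/sentiments-dataset-381-classes", split="train")

# TF-IDF features + logistic regression as a simple multiclass baseline
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train["text"], train["sentiment"])
print(clf.predict(["What a breathtaking view!"]))
```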
## Citation
If you use this dataset in your research or publication, please cite it as follows. For more information or inquiries about the dataset, please contact the dataset author(s) mentioned in the citation:
```
@dataset{sentiments_dataset_381_classes,
author = {Falah.G.Salieh},
title = {Sentiments Dataset (381 Classes)},
year = {2023},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/Falah/sentiments-dataset-381-classes},
}
``` |
cw1521/ember2018-malware | 2023-07-12T20:29:06.000Z | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"malware",
"virus",
"doi:10.57967/hf/0866",
"region:us"
] | cw1521 | null | null | null | 2 | 6 | ---
task_categories:
- text-classification
pretty_name: EMBER
size_categories:
- 1M<n<10M
tags:
- malware
- virus
---
# EMBER 2018 Malware Analysis Dataset

This dataset contains 1 million records of metadata and vectorized features for malware and benign software.
Visit https://github.com/elastic/ember for more information on the dataset.

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("cw1521/ember2018-malware", field="data")
```

Fields:
- `x`: vectorized features
- `y`: label (0 for benign, 1 for malware)
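A hedged sketch of a classical baseline on these features; the slice size, field access, and classifier choice are illustrative only:
```python
import numpy as np
from datasets import load_dataset
from sklearn.ensemble import RandomForestClassifier

# Load a small slice to keep the sketch lightweight
ds = load_dataset("cw1521/ember2018-malware", field="data", split="train[:5000]")
X = np.asarray(ds["x"], dtype=np.float32)  # vectorized PE features
y = np.asarray(ds["y"])                    # 0 = benign, 1 = malware

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy slice
``` |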
jorgeortizfuentes/universal_spanish_chilean_corpus | 2023-07-10T16:14:13.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"annotations_creators:found",
"size_categories:10M<n<100M",
"language:es",
"license:unknown",
"raw_corpora",
"chilean",
"spanish",
"multi-domain",
"multi-genre",
"region:us"
] | jorgeortizfuentes | null | null | null | 1 | 6 | ---
pretty_name: Universal Spanish Chilean Corpus
language:
- es
license: unknown
tags:
- raw_corpora
- chilean
- spanish
- multi-domain
- multi-genre
annotations_creators:
- found
task_categories:
- text-generation
- fill-mask
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype:
class_label:
names:
'0': books
'1': mc4
'2': twitter
'3': news
'4': complaints
splits:
- name: train
num_bytes: 72178078787
num_examples: 37213992
download_size: 43716140329
dataset_size: 72178078787
size_categories:
- 10M<n<100M
---
# Universal Chilean Spanish Corpus
This dataset consists of 37,213,992 texts in Chilean Spanish and in multidialectal Spanish.
The multidialectal Spanish texts come from [spanish books](https://huggingface.co/datasets/jorgeortizfuentes/spanish_books).
The Chilean Spanish texts come from the .cl domains of the [mc4 dataset](https://huggingface.co/datasets/mc4) and from tweets, news, and complaints in the [chilean-spanish-corpus](https://huggingface.co/datasets/jorgeortizfuentes/chilean-spanish-corpus)
| Name | Count | Source |
|------------|----------|-----------------------------------------------------------------------------------------------|
| books | 87967 | [spanish books](https://huggingface.co/datasets/jorgeortizfuentes/spanish_books) |
| mc4 | 8706681 | from [mc4 (.cl domains)](https://huggingface.co/datasets/mc4) in [chilean-spanish-corpus](https://huggingface.co/datasets/jorgeortizfuentes/chilean-spanish-corpus) |
| twitter | 27306583 | [chilean-spanish-corpus](https://huggingface.co/datasets/jorgeortizfuentes/chilean-spanish-corpus) |
| news | 1081542 | [chilean-spanish-corpus](https://huggingface.co/datasets/jorgeortizfuentes/chilean-spanish-corpus) |
| complaints | 31219 | [chilean-spanish-corpus](https://huggingface.co/datasets/jorgeortizfuentes/chilean-spanish-corpus) |
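A minimal usage sketch; streaming avoids downloading the full ~43 GB corpus up front:
```python
from datasets import load_dataset

ds = load_dataset(
    "jorgeortizfuentes/universal_spanish_chilean_corpus",
    split="train",
    streaming=True,
)
for example in ds.take(3):
    print(example["source"], example["text"][:80])
```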
The texts in this dataset were obtained through web crawling without distinguishing their copyright status; they may therefore be subject to restrictive copyrights. |
talby/spamassassin | 2023-07-11T18:36:22.000Z | [
"license:unknown",
"region:us"
] | talby | Welcome to the SpamAssassin public mail corpus. This is a selection of mail
messages, suitable for use in testing spam filtering systems. Pertinent
points:
- All headers are reproduced in full. Some address obfuscation has taken
place, and hostnames in some cases have been replaced with
"spamassassin.taint.org" (which has a valid MX record). In most cases
though, the headers appear as they were received.
- All of these messages were posted to public fora, were sent to me in the
knowledge that they may be made public, were sent by me, or originated as
newsletters from public news web sites.
- relying on data from public networked blacklists like DNSBLs, Razor, DCC
or Pyzor for identification of these messages is not recommended, as a
previous downloader of this corpus might have reported them!
- Copyright for the text in the messages remains with the original senders.
OK, now onto the corpus description. It's split into five parts, as follows:
- spam: 500 spam messages, all received from non-spam-trap sources.
- easy_ham: 2500 non-spam messages. These are typically quite easy to
differentiate from spam, since they frequently do not contain any spammish
signatures (like HTML etc).
- hard_ham: 250 non-spam messages which are closer in many respects to
typical spam: use of HTML, unusual HTML markup, coloured text,
"spammish-sounding" phrases etc.
- easy_ham_2: 1400 non-spam messages. A more recent addition to the set.
- spam_2: 1397 spam messages. Again, more recent.
Total count: 6047 messages, with about a 31% spam ratio. | null | null | 0 | 6 | ---
license: unknown
---
# Dataset Card for the SpamAssassin public mail corpus
## Dataset Description
- **Homepage:** https://spamassassin.apache.org/old/publiccorpus/readme.html
### Dataset Summary
This is a selection of mail messages, suitable for use in testing spam filtering systems assembled by members of the SpamAssassin project.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
- The `text` config normalizes all character sets to utf8 and dumps the
MIME tree as a JSON list of lists.
- The `unprocessed` config does not parse messages at all, leaving the
full headers and content as binary.
### Data Fields
- `label`: `spam` or `ham`
- `group`: SpamAssassin has grouped these samples into categories
{'hard_ham', 'spam_2', 'spam', 'easy_ham', 'easy_ham_2'}
- `text`: normalized text of the message bodies
- `raw`: full binary headers and contents of messages
### Data Splits
Only a _train_ split has been provided.
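A hedged usage sketch, following the configs and fields described above:
```python
from datasets import load_dataset

# "text" config: normalized utf-8 bodies; "unprocessed" keeps raw headers/bytes
ds = load_dataset("talby/spamassassin", "text", split="train")
print(ds[0]["label"], ds[0]["group"])
```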
## Dataset Creation
### Curation Rationale
It is hoped this dataset can help verify that modern NLP tools can solve
old NLP problems.
### Source Data
#### Initial Data Collection and Normalization
[The upstream corpus description](https://spamassassin.apache.org/old/publiccorpus/readme.html)
goes into detail on collection methods. The work here to recover text bodies
is largely done with [email.parser](https://docs.python.org/3/library/email.parser.html)
and [ftfy](https://pypi.org/project/ftfy/).
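A hedged sketch of that style of body recovery (the helper below is illustrative, not the exact pipeline used):
```python
import email
from email import policy

import ftfy

def recover_text(raw_bytes: bytes) -> str:
    """Extract and repair the plain-text parts of a raw RFC 822 message."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    parts = [
        ftfy.fix_text(part.get_content())
        for part in msg.walk()
        if part.get_content_type() == "text/plain"
    ]
    return "\n".join(parts)
```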
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |