false | # Madelon
The [Madelon dataset](https://archive-beta.ics.uci.edu/dataset/171/madelon) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Artificial dataset with continuous input variables.
Highly non-linear classification problem.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| madelon | Binary classification | |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/madelon")["train"]
``` |
false | # Sonar
The [Sonar dataset](https://archive-beta.ics.uci.edu/dataset/151/connectionist+bench+sonar+mines+vs+rocks) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Dataset to discriminate between sonar signals bounced off a metal cylinder and those bounced off a roughly cylindrical rock.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| sonar | Binary classification | Is the sonar detecting a rock? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/sonar")["train"]
``` |
false | # Balance scale
The [Balance scale dataset](https://archive-beta.ics.uci.edu/dataset/12/balance+scale) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Two weights are put on the arms of a scale. Where does the scale tilt?
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| balance | Multiclass classification | Where does the scale tilt? |
| is_balanced | Binary classification | Does the scale tilt? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/balance_scale", "balance")["train"]
```
# Features
The target feature changes according to the selected configuration and always appears in the last position in the dataset. |
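Since the target always occupies the last position, a minimal sketch of splitting features from the target (shown on mock rows, not the real data):

```python
# Mock rows standing in for dataset records, with the target in last
# position as described above; values here are purely illustrative.
rows = [
    [1, 5, 2, 4, "L"],   # features..., target
    [3, 3, 3, 3, "B"],
]
features = [row[:-1] for row in rows]
targets = [row[-1] for row in rows]
```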
false | # Hill
The [Hill dataset](https://archive.ics.uci.edu/ml/datasets/Hill) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Do the plotted coordinates draw a hill?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------------|
| hill | Binary classification | Do the plotted coordinates draw a hill? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/hill")["train"]
```
# Features
Features are the coordinates of the drawn point. Feature `X{i}` is the `y` coordinate of the point `(i, X{i})`. |
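A minimal sketch of reconstructing the plotted points from one record, as described above (feature names and values here are illustrative):

```python
# Feature X{i} is the y coordinate of the point (i, X{i}).
record = {"X1": 3.0, "X2": 5.0, "X3": 4.0}  # hypothetical sample values
points = [(i, record[f"X{i}"]) for i in range(1, len(record) + 1)]
# points == [(1, 3.0), (2, 5.0), (3, 4.0)]
```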
false | # Ionosphere
The [Ionosphere dataset](https://archive.ics.uci.edu/ml/datasets/Ionosphere) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Radar dataset of returns from the ionosphere; the task is to classify whether a received signal shows evidence of structure (free electrons) in the ionosphere.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| ionosphere | Binary classification | Does the received signal indicate electrons in the ionosphere?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/ionosphere")["train"]
``` |
false | # Musk
The [Musk dataset](https://archive.ics.uci.edu/ml/datasets/Musk) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Molecule dataset; the task is to predict whether a molecule is a musk, based on features describing its conformations.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------|
| musk | Binary classification | Is the molecule a musk?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/musk", "musk")["train"]
``` |
false | # Planning
The [Planning dataset](https://archive.ics.uci.edu/ml/datasets/Planning) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------|
| planning | Binary classification | Is the patient in a planning state?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/planning")["train"]
``` |
false | # Spambase
The [Spambase dataset](https://archive.ics.uci.edu/ml/datasets/Spambase) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Is the given mail spam?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------|
| spambase | Binary classification | Is the mail spam?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/spambase")["train"]
``` |
false | # House16
The [House16 dataset](https://www.openml.org/search?type=data&sort=runs&id=821&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| house16 | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/house16", "house16")["train"]
``` |
false | # Phoneme
The [Phoneme dataset](https://www.openml.org/search?type=data&sort=runs&id=1489&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| phoneme | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/phoneme")["train"]
```
|
false | # Post Operative
The [PostOperative dataset](https://archive-beta.ics.uci.edu/dataset/82/post+operative+patient) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Should the patient be discharged from the hospital, go to the ground floor, or to the ICU?
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| post_operative | Multiclass classification.|
| post_operative_binary | Binary classification. | |
false | # Shuttle
The [Shuttle dataset](https://archive-beta.ics.uci.edu/dataset/146/statlog+shuttle+satellite) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| shuttle | Multiclass classification.| |
| shuttle_0 | Binary classification. | Is the image of class 0? |
| shuttle_1 | Binary classification. | Is the image of class 1? |
| shuttle_2 | Binary classification. | Is the image of class 2? |
| shuttle_3 | Binary classification. | Is the image of class 3? |
| shuttle_4 | Binary classification. | Is the image of class 4? |
| shuttle_5 | Binary classification. | Is the image of class 5? |
| shuttle_6 | Binary classification. | Is the image of class 6? | |
false | # GuanacoDataset
**Notice: Effective immediately, the Guanaco and its associated dataset are now licensed under the GPLv3.**
Released weights:
- [KBlueLeaf/guanaco-7B-leh](https://huggingface.co/KBlueLeaf/guanaco-7B-leh) Base Model: comparable to the text-davinci-003 model
- [JosephusCheung/GuanacoLatest](https://huggingface.co/JosephusCheung/GuanacoLatest) Plugin LoRA: optimized for application scenarios of ChatLLM like gpt-3.5-turbo
The dataset for the [Guanaco model](https://guanaco-model.github.io/) is designed to enhance the multilingual capabilities and address various linguistic tasks. It builds upon the 175 tasks from the Alpaca model by providing rewrites of seed tasks in different languages and adding new tasks specifically designed for English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition. The dataset comprises a total of 534,530 entries, generated at a low cost of $6K.
- Free chat dialogues without System input: 32,880 entries (recent update) - in English, zh-Hans, zh-Hant-TW, Japanese, and German
*To test 0-shot performance on the original 175 tasks in Japanese and German after finetuning on chat data only.*
- Chat dialogues with System input: 16,087 entries (recent update) - in English, zh-Hans, zh-Hant-TW, and zh-Hant-HK
**A new additional dataset has been released: a separate, larger dataset is available for each language.**
The original 175 tasks were translated into 4 versions and regenerated independently:
Below are the details of the **mixed data**:
- Japanese (Ja-JP, recently updated): 7,485 entries
- Simplified Chinese (zh-Hans): 27,808 entries
- Traditional Chinese (Taiwan) (zh-Hant-TW): 21,481 entries
- Traditional Chinese (Hong Kong) (zh-Hant-HK): 19,247 entries
- English: 20K+ entries, not from Alpaca
In addition, a mini version of the 52K multilingual dataset is released with:
- Japanese (Ja-JP, recently updated): 7,485 entries
- Simplified Chinese (zh-Hans): 5,439 entries
- Traditional Chinese (Taiwan) (zh-Hant-TW): 9,322 entries
- Traditional Chinese (Hong Kong) (zh-Hant-HK): 9,954 entries
- English: 20,024 entries, not from Alpaca
The mini version is included in the full non-chat dataset.
**Additional dataset** *separated by language (temporary)*:
*This additional dataset should only be used for further training if the mixed data did not yield good results; training on it alone will not produce good results.*
This part of the data will be merged into the main dataset at the appropriate time.
- Chinese: 117,166 entries
- Simplified Chinese (zh-Hans): 92,530 entries
- Traditional Chinese (Taiwan) (zh-Hant-TW): 14,802 entries
- Traditional Chinese (Hong Kong) (zh-Hant-HK): 9,834 entries
- Japanese (Ja-JP, recently updated): 60,772 entries
In addition to the language-specific tasks, the dataset includes new tasks that aim to improve the model's performance in English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition. These new tasks ensure that the Guanaco model is well-rounded and capable of handling a wide range of challenges in the field of natural language processing.
By incorporating this diverse and comprehensive dataset into the Guanaco model, we aim to provide researchers and academics with a powerful tool for studying instruction-following language models in a multilingual context. The dataset's design encourages the development of more robust and versatile models capable of addressing complex linguistic tasks across different languages and domains.
**Additional dataset** *Paper/General-QA*:
The Paper/General-QA dataset is a collection of questions and answers constructed for AI-generated papers or general texts in English, Chinese, Japanese, and German. The question dataset contains 106,707 questions, and the answer dataset contains 99,292 answers. The purpose of this dataset is to generate paragraph-level answers to questions posed about lengthy documents such as PDFs. Similar questions are combined to form a tree-like structure, and graph theory algorithms are used to process user questions, content summaries, and contextual logic.
*It is worth noting that some ChatGPT applications claim to be able to read PDFs, but they do not actually read the entire article. Instead, they compare the user's input question with segmented paragraphs of the article, select the most similar paragraph, and insert it as the answer. This is not true language model reading, but rather a form of deception.*
**Note: I intentionally mixed up entries and languages to prevent anyone from selecting only certain language entries for finetuning. Doing so is unhelpful for the community, and since some tasks are 0-shot in specific languages, please use the complete dataset for finetuning.**
## To-Do List:
- Expand language support in the dataset:
Incorporate additional languages such as Japanese, German, and more into the dataset. This expansion should include task examples that cover advanced grammar analysis and dialogue understanding for these languages.
- Create a dialogue-oriented Chatbot dataset:
Develop a dataset specifically designed for conversation-based applications, containing examples that facilitate the model's ability to engage in interactive and dynamic dialogues with users.
- Add Toolformer-supporting tasks:
Introduce tasks that train the model to autonomously call external APIs using Toolformer, allowing the model to access and utilize various web services and data sources, thereby enhancing its problem-solving capabilities.
- Develop tasks for rapid integration of external knowledge:
Design tasks that encourage the model to quickly incorporate knowledge from external sources such as search engines and artificial intelligence knowledge engines. These tasks would be particularly beneficial for smaller models with limited knowledge reserves, enabling them to efficiently utilize external information to respond to user queries. |
false | # Ipums
The [Ipums dataset](https://archive-beta.ics.uci.edu/dataset/127/ipums+census+database) from the [UCI repository](https://archive-beta.ics.uci.edu/).
|
false | # Pums
The [Pums dataset](https://archive-beta.ics.uci.edu/dataset/116/us+census+data+1990) from the [UCI repository](https://archive-beta.ics.uci.edu/).
U.S. Census dataset; the task is to classify the income of the individual.
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| pums | Binary classification.| |
true |
## Dataset Summary
In [CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval](https://ai-muzic.github.io/clamp/), we introduce WikiMusicText (WikiMT), a new dataset for the evaluation of semantic search and music classification. It includes 1010 lead sheets in ABC notation sourced from Wikifonia.org, each accompanied by a title, artist, genre, and description. The title and artist information is extracted from the score, whereas the genre labels are obtained by matching keywords from the Wikipedia entries and assigned to one of the 8 classes (Jazz, Country, Folk, R&B, Pop, Rock, Dance, and Latin) that loosely mimic the GTZAN genres. The description is obtained by utilizing BART-large to summarize and clean the corresponding Wikipedia entry. Additionally, the natural language information within the ABC notation is removed.
WikiMT is a unique resource to support the evaluation of semantic search and music classification. However, it is important to acknowledge that the dataset was curated from publicly available sources, and there may be limitations concerning the accuracy and completeness of the genre and description information. Further research is needed to explore the potential biases and limitations of the dataset and to develop strategies to address them. Therefore, to support additional investigations, we also provide the [source files](https://github.com/microsoft/muzic/blob/main/clamp/wikimusictext/source_files.zip) of WikiMT, including the MusicXML files from Wikifonia and the original entries from Wikipedia.
## Copyright Disclaimer
WikiMT was curated from publicly available sources and is believed to be in the public domain. However, it is important to acknowledge that copyright issues cannot be entirely ruled out. Therefore, users of the dataset should exercise caution when using it. The authors of WikiMT do not assume any legal responsibility for the use of the dataset. If you have any questions or concerns regarding the dataset's copyright status, please contact the authors at shangda@mail.ccom.edu.cn.
## BibTeX entry and citation info
```
@misc{wu2023clamp,
title={CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval},
author={Shangda Wu and Dingyao Yu and Xu Tan and Maosong Sun},
year={2023},
eprint={2304.11029},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
``` |
false | The dialogsum dataset translated into Russian. Translation glitches were removed with automated cleaning. |
false | License: CC0-1.0
### Dataset Summary
The collection of MaCoCu parallel corpora have been crawled and consist of pairs of source and target segments (one or several sentences) and additional metadata. The following metadata is included:
- "src_url" and "trg_url": source and target document URL;
- "src_text" and "trg_text": text in non-English language and in English Language;
- "bleualign_score": similarity score as provided by the sentence alignment tool Bleualign (value between 0 and 1);
- "src_deferred_hash" and "trg_deferred_hash": hash identifier for the corresponding segment;
- "src_paragraph_id" and "trg_paragraph_id": identifier of the paragraph where the segment appears in the original document;
- "src_doc_title" and "trg_doc_title": title of the documents from which the segments were obtained;
- "src_crawl_date" and "trg_crawl_date": date and time when the source and target documents were downloaded;
- "src_file_type" and "trg_file_type": type of the original documents (usually HTML format);
- "src_boilerplate" and "trg_boilerplate": are source or target segments boilerplates?
- "bifixer_hash": hash identifier for the segment pair;
- "bifixer_score": score that indicates how likely the segments are to be correct in their corresponding language;
- "bicleaner_ai_score": score that indicates how likely the segments are to be parallel;
- "biroamer_entities_detected": do any of the segments contain personal information?
- "dsi": Digital Service Infrastructure (DSI) class: whether the segment is connected to any of the DSI classes (e.g., cybersecurity, e-health, e-justice, open-data-portal) defined by the Connecting Europe Facility (https://github.com/RikVN/DSI);
- "translation_direction": translation direction and machine translation identification: the source segment in each segment pair was identified using a probabilistic model (https://github.com/RikVN/TranslationDirection), which also determines whether the translation was produced by a machine translation system;
- "en_document_level_variant": the language variant of English (British or American, using a lexicon-based English variety classifier - https://pypi.org/project/abclf/) was identified on document and domain level;
- "domain_en": name of the web domain for the English document;
- "en_domain_level_variant": language variant for English at the level of the web domain.
To load a language pair, indicate the dataset and the pair of languages, with English first:
```python
from datasets import load_dataset

dataset = load_dataset("MaCoCu/parallel_data", "en-is")
```
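The quality scores above can be used to filter segment pairs. A minimal sketch on mock rows (the 0.5 threshold is an assumption for illustration, not a recommendation from the corpus authors):

```python
# Mock segment pairs mimicking the metadata fields described above.
rows = [
    {"src_text": "Halló heimur", "trg_text": "Hello world", "bicleaner_ai_score": 0.92},
    {"src_text": "???", "trg_text": "garbled", "bicleaner_ai_score": 0.11},
]
# Keep only pairs the model considers likely to be parallel.
clean = [r for r in rows if r["bicleaner_ai_score"] > 0.5]
```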
|
false | All of the data together is around 41GB. It's the last hidden states of 131,072 samples from refinedweb padded/truncated to 512 tokens on the left, fed through [google/flan-t5-small](https://hf.co/google/flan-t5-small).
Structure:
```
{
"encoding": List, shaped (512, 512) aka (tokens, d_model),
"text": String, the original text that was encoded,
"attention_mask": List, binary mask to pass to your model with encoding to not attend to pad tokens
}
```
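For downstream use, the `attention_mask` lets you ignore pad positions, e.g. for masked mean pooling over the hidden states. A minimal sketch with toy shapes (4 tokens, d_model=3) standing in for the real (512, 512):

```python
import numpy as np

# Toy stand-in for one record: real encodings are (512, 512).
encoding = np.arange(12, dtype=np.float32).reshape(4, 3)  # (tokens, d_model)
attention_mask = np.array([0, 0, 1, 1])                   # left padding

# Average hidden states over non-pad tokens only.
mask = attention_mask[:, None].astype(np.float32)
pooled = (encoding * mask).sum(axis=0) / mask.sum()
# pooled == [7.5, 8.5, 9.5]
```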
A tip: you cannot load this with the RAM in the free version of Google Colab, not even a single file, and streaming won't work there either. I have 80 GB of RAM and it was barely enough to work with streaming. |
false | # Dataset Card for "cnn_dailymail_azure_pt_pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true |
# Dataset Card for Novelupdates Webnovels
### Dataset Summary
This dataset contains information about webnovels from Novelupdates, a popular webnovel platform. It includes details such as novel ID, URL, title, associated names, cover image URL, show type, genres, tags, description, related series, recommendations, recommendation lists, rating, language, authors, artists, year, status, licensing information, translation status, publishers, release frequency, rankings, total reading list rank, and chapters.
### Supported Tasks and Leaderboards
The dataset can be used for various tasks such as text classification, zero-shot classification, and feature extraction. It currently does not have an established leaderboard.
### Languages
The dataset is primarily in English.
## Dataset Structure
### Data Instances
The dataset contains 14,713 data instances.
### Data Fields
The dataset includes the following fields:
- novel_id: integer
- url: string
- title: string
- associated_names: list of strings
- img_url: string
- showtype: string
- genres: list of strings
- tags: list of strings
- description: string
- related_series: struct
- related_series: list of structs
- title: string
- url: string
- total: integer
- recommendations: struct
- recommendations: list of structs
- recommended_user_count: integer
- title: string
- url: string
- total: integer
- recommendation_lists: struct
- list: list of structs
- title: string
- url: string
- total: integer
- rating: string
- language: string
- authors: list of strings
- artists: list of strings
- year: string
- status_coo: string
- licensed: string
- translated: string
- publishers: list of strings
- en_pubs: list of strings
- release_frequency: string
- weekly_rank: string
- monthly_rank: string
- all_time_rank: string
- monthly_rank_reading_list: string
- all_time_rank_reading_list: string
- total_reading_list_rank: string
- chapters: struct
- chapters: list of structs
- title: string
- url: string
- total: integer
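The struct fields wrap their lists one level deep; a minimal sketch of accessing them on a mock record (field values are illustrative):

```python
# Mock record mirroring the nested structure described above.
record = {
    "title": "Some Webnovel",
    "chapters": {
        "chapters": [
            {"title": "c1", "url": "https://example.org/c1"},
            {"title": "c2", "url": "https://example.org/c2"},
        ],
        "total": 2,
    },
}
chapter_titles = [c["title"] for c in record["chapters"]["chapters"]]
```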
### Data Splits
The dataset includes the following splits:
- Train: 11.8K examples
- Test: 2.94K examples
## Dataset Creation
### Curation Rationale
The dataset was curated to provide a comprehensive collection of webnovel information from Novelupdates for various text analysis tasks.
### Source Data
#### Initial Data Collection and Normalization
The initial data was collected from the Novelupdates website and normalized for consistency and structure.
#### Who are the source language producers?
The source language producers are the authors and publishers of the webnovels.
### Annotations
#### Annotation process
The dataset does not contain explicit annotations. It consists of the information available on the Novelupdates website.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The dataset does not include any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
We claim no ownership of this repo. This dataset is derived from `togethercomputer/RedPajama-Data-1T`.
We removed the CommonCrawl and C4 subsets from the original RedPajama dataset.
### Getting Started
The dataset consists of 2084 jsonl files.
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("chandan047/RedPajama-Data-1T-no-cc-c4")
```
Or you can directly download the files using the following command:
```
wget 'https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt'
while read line; do
dload_loc=${line#https://data.together.xyz/redpajama-data-1T/v1.0.0/}
mkdir -p $(dirname $dload_loc)
wget "$line" -O "$dload_loc"
done < urls.txt
```
A smaller 1B-token sample of the dataset can be found [here](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
| Dataset | Token Count |
|---------------|-------------|
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 157 Billion |
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```json
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
"red_pajama_subset": "common_crawl" | "c4" | "github" | "books" | "arxiv" | "wikipedia" | "stackexchange"
}
```
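Given this schema, records can be grouped by subset; a minimal sketch on mock records (in this derived dataset the `common_crawl` and `c4` values should not occur, since those subsets were removed):

```python
from collections import Counter

# Mock records following the schema above; contents are illustrative.
records = [
    {"text": "def f(): pass", "meta": {"source": "github"}, "red_pajama_subset": "github"},
    {"text": "We prove that...", "meta": {"source": "arxiv"}, "red_pajama_subset": "arxiv"},
    {"text": "class C: ...", "meta": {"source": "github"}, "red_pajama_subset": "github"},
]
per_subset = Counter(r["red_pajama_subset"] for r in records)
```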
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments and other
formatting boilerplate has been removed.
#### Gutenberg and Books3
The PG19 subset of the Gutenberg Project and Books3 datasets are downloaded from Huggingface. After downloading, we use
simhash to remove near duplicates.
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep latex source files and
remove preambles, comments, macros and bibliographies.
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove html tags, group the posts into question-answer pairs, and order answers by their score.
### SHA256 Checksums
SHA256 checksums for the dataset files for each data source are available here:
```
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/arxiv_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/book_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/c4_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/common_crawl_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/github_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/stackexchange_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/wikipedia_SHA256SUMS.txt
```
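A sketch of how downloaded files could be verified with `sha256sum -c`, demonstrated here on a locally created stand-in file (the real workflow would run it against the downloaded `*_SHA256SUMS.txt` in the matching directory):

```shell
# Create a stand-in file and a checksum list for it.
echo '{"text": "sample"}' > sample.jsonl
sha256sum sample.jsonl > SHA256SUMS.txt

# Verify: prints "sample.jsonl: OK" and exits 0 on success.
sha256sum -c SHA256SUMS.txt
```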
To cite RedPajama, please use:
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
month = apr,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
### License
Please refer to the licenses of the data subsets you use.
* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* GitHub was limited to MIT, BSD, or Apache licenses only
* Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
false |
# Dataset Card for PersiNLU (Reading Comprehension)
## Table of Contents
- [Dataset Card for PersiNLU (Reading Comprehension)](#dataset-card-for-persi_nlu_reading_comprehension)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{
'question': 'پیامبر در چه سالی به پیامبری رسید؟',
'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
'passage': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه\u200cهای اطراف آن دیار به تفکر و عبادت می\u200cپرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
'answers': [
{'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
]
}
```
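The `answer_start` index slices the answer out of the passage; a minimal sketch on a toy English example (the same slicing applies to the Persian records above):

```python
# Toy record mirroring the fields above; values are illustrative.
passage = "The answer is forty two, according to the book."
answer = {"answer_start": 14, "answer_text": "forty two"}
start = answer["answer_start"]
span = passage[start:start + len(answer["answer_text"])]
# span == "forty two"
```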
### Data Fields
- `question`: the question, mined using Google auto-complete.
- `passage`: the passage that contains the answer.
- `url`: the url from which the passage was mined.
- `answers`: a list of answers, containing the string and the index of the answer.
### Data Splits
The train/test split contains 600/575 samples.
## Dataset Creation
### Curation Rationale
The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
|
false |
# Dataset Card for DiaBLa: Bilingual dialogue parallel evaluation set
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [almanach.inria.fr/software_and_resources/custom/DiaBLa-en.html](http://almanach.inria.fr/software_and_resources/custom/DiaBLa-en.html)
- **Repository:** [github.com/rbawden/DiaBLa-dataset](https://github.com/rbawden/DiaBLa-dataset)
- **Paper:** [Bawden et al. (2021). DiaBLa: A Corpus of Bilingual Spontaneous Written Dialogues for Machine Translation. Language Resources and Evaluation(55). Pages 635–660. Springer Verlag. 10.1007/s10579-020-09514-4.](https://hal.inria.fr/hal-03021633)
- **Point of contact:** rachel.bawden[at]inria.fr
### Dataset Summary
The dataset is an English-French dataset for the evaluation of Machine Translation (MT) for informal, written bilingual dialogue.
The dataset contains 144 spontaneous dialogues (5,700+ sentences) between native English and French speakers, mediated by one of two neural MT systems in a range of role-play settings. See below for some basic statistics. The dialogues are accompanied by fine-grained sentence-level judgments of MT quality, produced by the dialogue participants themselves, as well as by manually normalised versions and reference translations produced a posteriori. See the dataset repository for information about evaluation.
The motivation for the corpus is two-fold, to provide:
- a unique resource for evaluating MT models for dialogue (i.e. in context)
- a corpus for the analysis of MT-mediated communication
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (mainly UK) and French
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 37 MB
- **Number of parallel utterances:** 5748
Each example is highly annotated and is associated with dialogue context. An example from the test set looks as follows (only the first and last utterances of the dialogue history are shown for readability purposes):
```
{
"id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_25",
"mt": "Tu m'en veux pour \u00e7a ?",
"norm": "",
"orig": "Are you blaming me for this?",
"ref": "C'est moi que vous critiquez pour \u00e7a\u00a0?",
"utterance_meta": {
"eval_judgment": "medium",
"eval_verbatim": "",
"eval_problems": [
"coherence"
],
"lang": "english"
},
"dialogue_meta": {
"start_time": "2018-04-25T16:20:36.087170",
"end_time": "",
"translation_model": "baseline",
"final_evaluation_user1": {
"style": "average",
"coherence": "average",
"grammaticality": "good",
"meaning": "average",
"word_choice": "average"
},
"final_evaluation_user2": {
"style": "",
"coherence": "",
"grammaticality": "",
"meaning": "",
"word_choice": ""
},
"scenario": [
[
"You are both stuck in a lift at work.",
"Vous \u00eates tous les deux bloqu\u00e9(e)s dans un ascenseur au travail."
],
[
"You are an employee and you are with your boss.",
"Vous \u00eates un(e) employ\u00e9(e) et vous \u00eates avez votre patron(ne)"
],
[
"You are the boss and are with an employee.",
"Vous \u00eates le ou la patron(ne) et vous \u00eates avec un(e) employ\u00e9(e)"
]
],
"user1": {
"role_num": 1,
"role": [
"You are an employee and you are with your boss.",
"Vous \u00eates un(e) employ\u00e9(e) et vous \u00eates avez votre patron(ne)"
],
"initiated_dialogue": true,
"turn_number": 2,
"lang": "french"
},
"user2": {
"role_num": 2,
"role": [
"You are the boss and are with an employee.",
"Vous \u00eates le ou la patron(ne) et vous \u00eates avec un(e) employ\u00e9(e)"
],
"initiated_dialogue": false,
"turn_number": 1,
"lang": "english"
}
},
"dialogue_history": [
{
"id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_0",
"orig": "We appear to have stopped moving.",
"norm": "",
"mt": "On semble avoir arr\u00eat\u00e9 de bouger.",
"ref": "J'ai l'impression qu'on s'est arr\u00eat\u00e9s.",
"utterance_meta": {
"eval_judgment": "medium",
"eval_verbatim": "",
"eval_problems": [
"style"
],
"lang": "english"
}
},
[...]
{
"id": "dialogue-2018-04-25T16-20-36.087170_french_english_1_2_24",
"orig": "La sonnerie s'est arr\u00eat\u00e9, je pense que personne ne va nous r\u00e9pondre.",
"norm": "",
"mt": "The ringing stopped, and I don't think anyone's gonna answer us.",
"ref": "It stopped ringing. I don't think anybody's going to reply.",
"utterance_meta": {
"eval_judgment": "perfect",
"eval_verbatim": "",
"eval_problems": [],
"lang": "french"
}
}
]
}
```
### Data Fields
#### plain_text
- `id`: a `string` feature.
- `orig`: a `string` feature.
- `norm`: a `string` feature.
- `mt`: a `string` feature.
- `ref`: a `string` feature.
- `utterance_meta`: a dictionary feature containing:
- `eval_judgment`: a `string` feature.
- `eval_verbatim`: a `string` feature.
- `eval_problems`: a list feature containing:
- up to 5 `string` features.
- `lang`: a `string` feature.
- `dialogue_meta`: a dictionary feature containing:
- `start_time` : a `string` feature.
- `end_time`: a `string` feature.
- `translation_model`: a `string` feature.
- `final_evaluation_user1`: a dictionary feature containing:
- `style`: a `string` feature.
- `coherence`: a `string` feature.
- `grammaticality`: a `string` feature.
- `meaning`: a `string` feature.
- `word_choice`: a `string` feature.
- `final_evaluation_user2`: a dictionary feature containing:
- `style`: a `string` feature.
- `coherence`: a `string` feature.
- `grammaticality`: a `string` feature.
- `meaning`: a `string` feature.
- `word_choice`: a `string` feature.
- `scenario`: a list feature containing
- 3 lists each containing 2 `string` features.
- `user1`: a dictionary feature containing:
- `role_num`: an `int` feature.
- `role`: a list feature containing:
- 2 `string` features.
- `initiated_dialogue`: a `bool` feature.
- `turn_number`: an `int` value.
- `lang`: a `string` value.
- `user2`: a dictionary feature containing:
- `role_num`: an `int` feature.
- `role`: a list feature containing:
- 2 `string` features.
- `initiated_dialogue`: a `bool` feature.
- `turn_number`: an `int` value.
- `lang`: a `string` value.
- `dialogue_history`: a list feature containing:
- dictionary features containing:
- `id`: a `string` feature.
- `orig`: a `string` feature.
- `norm`: a `string` feature.
- `mt`: a `string` feature.
- `ref`: a `string` feature.
- `utterance_meta`: a dictionary feature containing:
- `eval_judgment`: a `string` feature.
- `eval_verbatim`: a `string` feature.
- `eval_problems`: a list feature containing:
- up to 5 `string` features.
- `lang`: a `string` feature.
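As a sketch of how these fields combine, the snippet below rebuilds the (original, machine-translated) turn sequence for one row; `example` is a stripped-down stand-in keeping only the fields used:

```python
# Sketch: reconstruct the bilingual turn sequence for one row.
# `example` is a stripped-down stand-in with only the fields used here.
example = {
    "orig": "Are you blaming me for this?",
    "mt": "Tu m'en veux pour ça ?",
    "dialogue_history": [
        {"orig": "We appear to have stopped moving.",
         "mt": "On semble avoir arrêté de bouger."},
    ],
}

# Earlier turns come from `dialogue_history`; the current turn is top-level.
turns = [(t["orig"], t["mt"]) for t in example["dialogue_history"]]
turns.append((example["orig"], example["mt"]))
```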
### Data Splits
DiaBLa is a test set only.
| name |test |
|----------|------:|
|plain_text| 5748|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Original data was collected through a [dedicated online chat platform](https://github.com/rbawden/diabla-chat-interface) and involved native speakers of English and of French. As well as producing the original text, participants also annotated the quality of the machine-translated outputs of their partners' utterances (which they saw instead of their partners' original text) based on their monolingual intuitions and the dialogue context.
Each dialogue is assigned one of 12 role-play scenarios and, where appropriate, each participant is assigned a role to play in the dialogue.
#### Who are the source language producers?
The source text producers were native French and native English volunteers (mainly British English). See the paper for very basic information concerning their backgrounds (age categories and experience in NLP).
### Annotations
#### Annotation process
On top of the original dialogue text (a mixture of utterances in English and in French), the following "annotations" are provided:
- machine translated version of the original text (produced in real time and presented during the dialogue), produced by one of two MT systems, both trained using [Marian](https://github.com/marian-nmt/marian).
- judgments of MT quality by participants (overall quality, particular problems, verbatim comments)
- manually produced normalised version of the original text (for spelling mistakes, grammatical errors, missing punctuation, etc.)
- manually produced reference translations
#### Who are the annotators?
The judgments of MT quality were produced by the dialogue participants themselves in real time. The normalised version of the text and the reference translations were manually produced by the authors of the paper. Translations were always done into the translator's native language and all translations were verified and post-edited by a bilingual English-French speaker.
### Personal and Sensitive Information
A priori the dataset does not contain personal and sensitive information. Participants were instructed not to give any personal information and to assume the roles assigned in the role play scenario. Usernames were anonymised prior to distribution and any mention of either usernames or real names in the dialogues were replaced by generic names of the same gender as the participant. Only basic user information was collected to get an idea of the distribution of participants and to potentially see how multilingual ability influences quality judgments (rough age categories, experience in NLP or research, native languages, familiarity with the other language (either English or French), other languages spoken and gender). Gender was included because it is an important factor in translation (particularly for the direction English-to-French), and this was explained in advance to the participants in the FAQs.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was collected by Rachel Bawden, Eric Bilinski, Thomas Lavergne and Sophie Rosset (see citation below).
### Licensing Information
The dataset is available under a CC BY-SA 4.0 licence.
### Citation Information
If you use or are inspired by this dataset, please cite:
```
@article{bawden_DiaBLa:-A-Corpus-of_2021,
author = {Bawden, Rachel and Bilinski, Eric and Lavergne, Thomas and Rosset, Sophie},
doi = {10.1007/s10579-020-09514-4},
title = {DiaBLa: A Corpus of Bilingual Spontaneous Written Dialogues for Machine Translation},
year = {2021},
journal = {Language Resources and Evaluation},
publisher = {Springer Verlag},
volume = {55},
pages = {635--660},
url = {https://hal.inria.fr/hal-03021633},
pdf = {https://hal.inria.fr/hal-03021633/file/diabla-lre-personal-formatting.pdf},
}
```
### Contributions
This dataset was added by Rachel Bawden [@rbawden](https://github.com/rbawden). |
true |
# Dataset Card for author_profiling
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/sag111/Author-Profiling
- **Repository:** https://github.com/sag111/Author-Profiling
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Sboev Alexander](mailto:sag111@mail.ru)
### Dataset Summary
The corpus for author profiling analysis contains Russian-language texts labeled for 5 tasks:
1) gender -- 13448 texts labeled with whether the text was written by a female or a male author;
2) age -- 13448 texts labeled with the age of the person who wrote the text, a number from 12 to 80. In addition, for the classification task we added 5 age groups: 0-19; 20-29; 30-39; 40-49; 50+;
3) age imitation -- 8460 texts, where crowdsourced authors were asked to write three texts:
  a) in their natural manner,
  b) imitating the style of someone younger,
  c) imitating the style of someone older;
4) gender imitation -- 4988 texts, where crowdsourced authors were asked to write texts both in their own gender and pretending to be the opposite gender;
5) style imitation -- 4988 texts, where crowdsourced authors were asked to write a text on behalf of another person of their own gender, distorting their usual style.
The dataset was collected using the [Yandex.Toloka](https://toloka.yandex.ru/en) service.
You can read the data using the following python code:
```
import json

def load_jsonl(input_path: str) -> list:
    """
    Read a list of objects from a JSON Lines file.
    """
    data = []
    with open(input_path, 'r', encoding='utf-8') as f:
        for line in f:
            data.append(json.loads(line.rstrip('\n\r')))
    print('Loaded {} records from {}\n'.format(len(data), input_path))
    return data
path_to_file = "./data/train.jsonl"
data = load_jsonl(path_to_file)
```
or you can use HuggingFace style:
```
from datasets import load_dataset
train_df = load_dataset('sagteam/author_profiling', split='train')
valid_df = load_dataset('sagteam/author_profiling', split='validation')
test_df = load_dataset('sagteam/author_profiling', split='test')
```
#### Here are some statistics:
1. For Train file:
- No. of documents -- 9564;
- No. of unique texts -- 9553;
- Text length in characters -- min: 197, max: 2984, mean: 500.5;
- No. of documents written -- by men: 4704, by women: 4860;
- No. of unique authors -- 2344; men: 1172, women: 1172;
- Age of the authors -- min: 13, max: 80, mean: 31.2;
- No. of documents by age group -- 0-19: 813, 20-29: 4188, 30-39: 2697, 40-49: 1194, 50+: 672;
- No. of documents with gender imitation: 1215; without gender imitation: 2430; not applicable: 5919;
- No. of documents with age imitation -- younger: 1973; older: 1973; without age imitation: 1973; not applicable: 3645;
- No. of documents with style imitation: 1215; without style imitation: 2430; not applicable: 5919.
2. For Valid file:
- No. of documents -- 1320;
- No. of unique texts -- 1316;
- Text length in characters -- min: 200, max: 2809, mean: 520.8;
- No. of documents written -- by men: 633, by women: 687;
- No. of unique authors -- 336; men: 168, women: 168;
- Age of the authors -- min: 15, max: 79, mean: 32.2;
   - No. of documents by age group -- 0-19: 117, 20-29: 570, 30-39: 339, 40-49: 362, 50+: 132;
- No. of documents with gender imitation: 156; without gender imitation: 312; not applicable: 852;
- No. of documents with age imitation -- younger: 284; older: 284; without age imitation: 284; not applicable: 468;
- No. of documents with style imitation: 156; without style imitation: 312; not applicable: 852.
3. For Test file:
- No. of documents -- 2564;
- No. of unique texts -- 2561;
- Text length in characters -- min: 199, max: 3981, mean: 515.6;
- No. of documents written -- by men: 1290, by women: 1274;
- No. of unique authors -- 672; men: 336, women: 336;
- Age of the authors -- min: 12, max: 67, mean: 31.8;
   - No. of documents by age group -- 0-19: 195, 20-29: 1131, 30-39: 683, 40-49: 351, 50+: 204;
- No. of documents with gender imitation: 292; without gender imitation: 583; not applicable: 1689;
- No. of documents with age imitation -- younger: 563; older: 563; without age imitation: 563; not applicable: 875;
- No. of documents with style imitation: 292; without style imitation: 583; not applicable: 1689.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class and multi-label text classification.
The baseline models currently achieve the following weighted-F1 scores:
| Model name | gender | age_group | gender_imitation | age_imitation | style_imitation | no_imitation | average |
| ------------------- | ------ | --------- | ---------------- | ------------- | --------------- | ------------ | ------- |
| Dummy-stratified | 0.49 | 0.29 | 0.56 | 0.32 | 0.57 | 0.55 | 0.46 |
| Dummy-uniform | 0.49 | 0.23 | 0.51 | 0.32 | 0.51 | 0.51 | 0.43 |
| Dummy-most_frequent | 0.34 | 0.27 | 0.53 | 0.17 | 0.53 | 0.53 | 0.40 |
| LinearSVC + TF-IDF | 0.67 | 0.37 | 0.62 | 0.72 | 0.71 | 0.71 | 0.63 |
### Languages
The text in the dataset is in Russian.
## Dataset Structure
### Data Instances
Each instance is a text in Russian with some author profiling annotations.
An example for an instance from the dataset is shown below:
```
{
'id': 'crowdsource_4916',
'text': 'Ты очень симпатичный, Я давно не с кем не встречалась. Ты мне сильно понравился, ты умный интересный и удивительный, приходи ко мне в гости , у меня есть вкусное вино , и приготовлю вкусный ужин, посидим пообщаемся, узнаем друг друга поближе.',
'account_id': 'account_#1239',
'author_id': 411,
'age': 22,
'age_group': '20-29',
'gender': 'male',
'no_imitation': 'with_any_imitation',
'age_imitation': 'None',
'gender_imitation': 'with_gender_imitation',
'style_imitation': 'no_style_imitation'
}
```
### Data Fields
Data fields include:
- id -- unique identifier of the sample;
- text -- the author's text, written by a crowdsourcing user;
- author_id -- unique identifier of the user;
- account_id -- unique identifier of the crowdsource account;
- gender -- gender annotations ('male' or 'female');
- age -- age annotations;
- age_group -- age group annotations;
- no_imitation -- imitation annotations.
Label codes:
- 'with_any_imitation' -- there is some imitation in the text;
- 'no_any_imitation' -- the text is written without any imitation
- age_imitation -- age imitation annotations.
Label codes:
- 'younger' -- someone younger than the author is imitated in the text;
- 'older' -- someone older than the author is imitated in the text;
- 'no_age_imitation' -- the text is written without age imitation;
- 'None' -- not supported (the text was not written for this task)
- gender_imitation -- gender imitation annotations.
Label codes:
- 'no_gender_imitation' -- the text is written without gender imitation;
- 'with_gender_imitation' -- the text is written with a gender imitation;
- 'None' -- not supported (the text was not written for this task)
- style_imitation -- style imitation annotations.
Label codes:
- 'no_style_imitation' -- the text is written without style imitation;
- 'with_style_imitation' -- the text is written with a style imitation;
- 'None' -- not supported (the text was not written for this task).
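As a sketch of how these label codes can be used, the snippet below filters a handful of rows for gender imitation; the rows are made-up stand-ins that follow the field names above:

```python
# Made-up rows following the field names and label codes above (not real data).
rows = [
    {"id": "crowdsource_4916", "gender": "male",
     "gender_imitation": "with_gender_imitation"},
    {"id": "crowdsource_0001", "gender": "female",
     "gender_imitation": "no_gender_imitation"},
    {"id": "crowdsource_0002", "gender": "male",
     "gender_imitation": "None"},  # text was not written for this task
]

# Keep only rows written with a gender imitation.
imitated = [r["id"] for r in rows
            if r["gender_imitation"] == "with_gender_imitation"]
```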
### Data Splits
The dataset includes a set of train/valid/test splits with 9564, 1320 and 2564 texts respectively.
The unique authors do not overlap between the splits.
## Dataset Creation
### Curation Rationale
The dataset consists of Russian texts collected using a crowdsourcing platform. It can be used to improve the accuracy of supervised classifiers on author profiling tasks.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from a crowdsourcing platform. Each text was written by the author specifically for the task provided.
#### Who are the source language producers?
Russian-speaking Yandex.Toloka users.
### Annotations
#### Annotation process
We used a crowdsourcing platform to collect texts. Each respondent was asked to fill in a questionnaire including their gender, age and native language.
For the age imitation task, the respondents chose a topic out of a few suggested and wrote three texts on it:
1) a text in their natural manner;
2) a text imitating the style of someone younger;
3) a text imitating the style of someone older.
For the gender and style imitation tasks, each author wrote three texts in certain different styles:
1) a text in the author's natural style;
2) a text imitating the other gender's style;
3) a text in a different style but without gender imitation.
The topics to choose from are the following.
- An attempt to persuade some arbitrary listener to meet the respondent at their place;
- A story about some memorable event/acquisition/rumour or whatever else the imaginary listener is supposed to enjoy;
- A story about oneself or about someone else, aiming to please the listener and win their favour;
- A description of oneself and one’s potential partner for a dating site;
- An attempt to persuade an unfamiliar person to come;
- A negative tour review.
A submitted task does not pass checking and is considered improper work if it contains:
- Irrelevant answers to the questionnaire;
- Incoherent jumble of words;
- Chunks of text borrowed from somewhere else;
- Texts not conforming to the above list of topics.
Text checking is performed first by an automated search for borrowings (via an anti-plagiarism website), and then by a manual review of compliance with the task.
#### Who are the annotators?
Russian-speaking Yandex.Toloka users.
### Personal and Sensitive Information
All personal data was anonymized. Each author has been assigned an impersonal, unique identifier.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Researchers at AI technology lab at NRC "Kurchatov Institute". See the [website](https://sagteam.ru/).
### Licensing Information
Apache License 2.0.
### Citation Information
If you have found our results helpful in your work, feel free to cite our publication.
```
@article{сбоев2022сравнение,
title={СРАВНЕНИЕ ТОЧНОСТЕЙ МЕТОДОВ НА ОСНОВЕ ЯЗЫКОВЫХ И ГРАФОВЫХ НЕЙРОСЕТЕВЫХ МОДЕЛЕЙ ДЛЯ ОПРЕДЕЛЕНИЯ ПРИЗНАКОВ АВТОРСКОГО ПРОФИЛЯ ПО ТЕКСТАМ НА РУССКОМ ЯЗЫКЕ},
author={Сбоев, АГ and Молошников, ИА and Рыбка, РБ and Наумов, АВ and Селиванов, АА},
journal={Вестник Национального исследовательского ядерного университета МИФИ},
volume={10},
number={6},
pages={529--539},
year={2021},
publisher={Общество с ограниченной ответственностью МАИК "Наука/Интерпериодика"}
}
```
### Contributions
Thanks to [@naumov-al](https://github.com/naumov-al) for adding this dataset.
|
true | # AutoTrain Dataset for project: kor_hate_eval
## Dataset Description
This dataset has been automatically processed by AutoTrain for project kor_hate_eval.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "(\ud604\uc7ac \ud638\ud154\uc8fc\uc778 \uc2ec\uc815) \uc54418 \ub09c \ub9c8\ub978\ud558\ub298\uc5d0 \ub0a0\ubcbc\ub77d\ub9de\uace0 \ud638\ud154\ub9dd\ud558\uac8c\uc0dd\uacbc\ub294\ub370 \ub204\uad70 \uacc4\uc18d \ucd94\ubaa8\ubc1b\ub124....",
"target": 1
},
{
"text": "....\ud55c\uad6d\uc801\uc778 \ubbf8\uc778\uc758 \ub300\ud45c\uc801\uc778 \ubd84...\ub108\ubb34\ub098 \uacf1\uace0\uc544\ub984\ub2e4\uc6b4\ubaa8\uc2b5...\uadf8\ubaa8\uc2b5\ub4a4\uc758 \uc2ac\ud514\uc744 \ubbf8\ucc98 \uc54c\uc9c0\ubabb\ud588\ub124\uc694\u3160",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['Default', 'Spoiled'], id=None)"
}
```
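Since `target` is stored as a `ClassLabel` index, it maps back to the class names declared above; a minimal sketch:

```python
# Map the integer `target` back to the class name declared in the schema above.
class_names = ["Default", "Spoiled"]

sample = {"text": "....한국적인 미인의 대표적인 분...", "target": 0}
label = class_names[sample["target"]]
```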
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7896 |
| valid | 3770 |
|
true |
# Dataset Card for "NLPCC 2016: Stance Detection in Chinese Microblogs"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://tcci.ccf.org.cn/conference/2016/pages/page05_evadata.html](http://tcci.ccf.org.cn/conference/2016/pages/page05_evadata.html)
- **Repository:**
- **Paper:** [https://link.springer.com/chapter/10.1007/978-3-319-50496-4_85](https://link.springer.com/chapter/10.1007/978-3-319-50496-4_85)
- **Point of Contact:** [Mads Kongsback](https://github.com/mkonxd)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
This is a stance prediction dataset in Chinese.
The data comes from a shared task, stance detection in Chinese microblogs, in NLPCC-ICCPOL 2016. It covers Task A, a mandatory supervised task that detects stance towards five targets of interest, given labeled data.
Some instances have been removed from the dataset because they had no label.
### Supported Tasks and Leaderboards
* Stance Detection in Chinese Microblogs
### Languages
Chinese, as spoken on the Weibo website (`bcp47:zh`)
## Dataset Structure
### Data Instances
Example instance:
```
{
'id': '0',
'target': 'IphoneSE',
'text': '3月31日,苹果iPhone SE正式开卖,然而这款小屏新机并未出现人们预想的疯抢局面。根据市场分析机构Localytics周一公布的数据,iPhone SE正式上市的这个周末,销量成绩并不算太好。',
'stance': 2
}
```
### Data Fields
* id: a `string` field with a unique id for the instance
* target: a `string` representing the target of the stance
* text: a `string` of the stance-bearing text
* stance: an `int` representing class label -- `0`: AGAINST; `1`: FAVOR; `2`: NONE.
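A sketch of decoding the integer `stance` into its class name, using the mapping above:

```python
# Decode the integer stance label using the mapping above.
STANCE_NAMES = {0: "AGAINST", 1: "FAVOR", 2: "NONE"}

instance = {"id": "0", "target": "IphoneSE", "stance": 2}
stance_name = STANCE_NAMES[instance["stance"]]
```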
### Data Splits
The training split has 2986 instances.
## Dataset Creation
### Curation Rationale
The goal was to create a dataset of microblog text annotated for stance. Six stance targets were selected and data was collected from Sina Weibo for annotation.
### Source Data
#### Initial Data Collection and Normalization
Not specified
#### Who are the source language producers?
Sina Weibo users
### Annotations
#### Annotation process
Each target-microblog pair was annotated independently by two students. If both students provided the same annotation, that stance label was assigned. If they disagreed, a third student was assigned to annotate the pair, and the final label was decided by majority vote.
#### Who are the annotators?
Students in China
### Personal and Sensitive Information
No reflections
## Considerations for Using the Data
### Social Impact of Dataset
The data preserves social media utterances verbatim, and so forgoes any right to be forgotten, though usernames and post IDs are not explicitly included in the data.
### Discussion of Biases
There will be at least a temporal and regional bias to this data, and it only represents expressions of stance on six topics.
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@incollection{xu2016overview,
title={Overview of nlpcc shared task 4: Stance detection in chinese microblogs},
author={Xu, Ruifeng and Zhou, Yu and Wu, Dongyin and Gui, Lin and Du, Jiachen and Xue, Yun},
booktitle={Natural language understanding and intelligent applications},
pages={907--916},
year={2016},
publisher={Springer}
}
```
### Contributions
Added by [@mkonxd](https://github.com/mkonxd), [@leondz](https://github.com/leondz)
|
false |
# Dataset Card for CORD (Consolidated Receipt Dataset)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository: https://github.com/clovaai/cord**
- **Paper: https://openreview.net/pdf?id=SJl3z659UH**
- **Leaderboard: https://paperswithcode.com/dataset/cord**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
```python
{
"id": datasets.Value("string"),
"words": datasets.Sequence(datasets.Value("string")),
"bboxes": datasets.Sequence(datasets.Sequence(datasets.Value("int64"))),
"labels": datasets.Sequence(datasets.features.ClassLabel(names=_LABELS)),
"images": datasets.features.Image(),
}
```
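As a sketch of what one row looks like under this schema, with made-up values (the label names here are placeholders, not the real `_LABELS` set):

```python
# A hypothetical row under the schema above; values and label names are
# made up for illustration (the real label set comes from `_LABELS`).
row = {
    "id": "receipt_0001",
    "words": ["LATTE", "4,500"],
    "bboxes": [[10, 20, 110, 40], [120, 20, 180, 40]],  # one box per word
    "labels": ["menu.nm", "menu.price"],  # placeholder label names
    "images": None,  # a PIL image in the real dataset
}

# Word-level annotations must stay aligned across the three sequences.
aligned = len(row["words"]) == len(row["bboxes"]) == len(row["labels"])
```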
### Data Splits
- train (800 rows)
- validation (100 rows)
- test (100 rows)
## Additional Information
### Licensing Information
[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{park2019cord,
title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
  author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk},
  booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
  year={2019}
}
```
### Contributions
Thanks to [@clovaai](https://github.com/clovaai) for adding this dataset. |
false |
# laion2B-multi-chinese-subset
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
取自Laion2B多语言多模态数据集中的中文部分,一共143M个图文对。
A subset from Laion2B (a multimodal dataset), around 143M image-text pairs (only Chinese).
## 数据集信息 Dataset Information
大约一共143M个中文图文对。大约占用19GB空间(仅仅是url等文本信息,不包含图片)。
Around 143M Chinese image-text pairs in total, taking about 19GB of space (text information such as URLs only; images are not included).
- Homepage: [laion-5b](https://laion.ai/blog/laion-5b/)
- Huggingface: [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## 下载 Download
```bash
mkdir laion2b_chinese_release && cd laion2b_chinese_release
for i in {00000..00012}; do wget https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset/resolve/main/data/train-$i-of-00013.parquet; done
cd ..
```
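The same 13 shard URLs can also be generated programmatically (a small sketch mirroring the wget loop above; the path pattern is taken directly from that loop):

```python
# Build the list of parquet shard URLs fetched by the wget loop above.
BASE = ("https://huggingface.co/datasets/IDEA-CCNL/"
        "laion2B-multi-chinese-subset/resolve/main/data")
urls = [f"{BASE}/train-{i:05d}-of-00013.parquet" for i in range(13)]
print(urls[0])
```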
## License
CC-BY-4.0
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
false | # Balloons
The [Balloons dataset](https://archive.ics.uci.edu/ml/datasets/Balloons) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict if the given balloon is inflated.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|--------------------------------------------|---------------------------|--------------------------------------------------------------------------------------------------|
| adult_or_stretch | Binary classification | Balloons are inflated if age == adult or act == stretch. |
| adult_and_stretch | Binary classification | Balloons are inflated if age == adult and act == stretch. |
| yellow_and_small | Binary classification | Balloons are inflated if color == yellow and size == small. |
| yellow_and_small_or_adult_and_stretch | Binary classification | Balloons are inflated if color == yellow and size == small or age == adult and act == stretch. |
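For illustration, the four labeling rules above can be written as plain predicates over the features listed further down in this card (a sketch only; the exact casing of the feature values, e.g. `"adult"` vs `"ADULT"`, should be checked against the actual data):

```python
# Hedged sketch of the four configuration rules; feature values are
# assumed to be lowercase strings here.
def adult_and_stretch(age: str, act: str) -> int:
    return int(age == "adult" and act == "stretch")

def adult_or_stretch(age: str, act: str) -> int:
    return int(age == "adult" or act == "stretch")

def yellow_and_small(color: str, size: str) -> int:
    return int(color == "yellow" and size == "small")

def yellow_and_small_or_adult_and_stretch(color: str, size: str,
                                          age: str, act: str) -> int:
    return int(yellow_and_small(color, size) or adult_and_stretch(age, act))

print(adult_or_stretch("adult", "dip"))  # inflated under the OR rule: 1
```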
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/balloons", "adult_or_stretch")["train"]
```
# Features
|**Feature** |**Type** | **Description** |
|-------------------|-----------|-------------------|
|`color` |`[string]` | Balloon's color. |
|`size` |`[string]` | Balloon's size. |
|`act` |`[string]` | Balloon's state. |
|`age` |`[string]` | Balloon's age. |
|`is_inflated`      |`[int8]`   | The inflation status of the balloon.| |
false | # Haberman
The [Haberman dataset](https://archive.ics.uci.edu/ml/datasets/Haberman) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Has the patient survived surgery?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------|
| survival          | Binary classification     | Has the patient survived surgery? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/haberman", "survival")["train"]
``` |
false | # Pima
The [Pima dataset](https://archive.ics.uci.edu/ml/datasets/Ozone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict whether a patient has diabetes.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| pima | Binary classification | Does the patient have diabetes?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/pima")["train"]
``` |
false | # TwoNorm
The [TwoNorm dataset](https://www.openml.org/search?type=data&status=active&id=1507) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| twonorm | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/twonorm")["train"]
```
|
false | # Vertebral Column
The [Vertebral Column dataset](https://archive.ics.uci.edu/ml/datasets/vertebral+column) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| abnormal | Binary classification | Is the spine abnormal?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/vertebral_column")["train"]
``` |
false | # Pol
The [Pol dataset](https://www.openml.org/search?type=data&sort=runs&id=151&status=active) from the [OpenML repository](https://www.openml.org/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| pol | Binary classification | Has the pol cost gone up?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/pol", "pol")["train"]
``` |
false | # Steel Plates
The [Steel Plates dataset](https://archive-beta.ics.uci.edu/dataset/198/steel+plates+faults) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| steel_plates | Multiclass classification.| |
| steel_plates_0 | Binary classification. | Is the input of class 0? |
| steel_plates_1 | Binary classification. | Is the input of class 1? |
| steel_plates_2 | Binary classification. | Is the input of class 2? |
| steel_plates_3 | Binary classification. | Is the input of class 3? |
| steel_plates_4 | Binary classification. | Is the input of class 4? |
| steel_plates_5 | Binary classification. | Is the input of class 5? |
| steel_plates_6 | Binary classification. | Is the input of class 6? | |
false | # WallFollowing
The [WallFollowing dataset](https://archive-beta.ics.uci.edu/dataset/194/wall+following+robot+navigation+data) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| wall_following | Multiclass classification.| |
| wall_following_0 | Binary classification. | Is the instance of class 0? |
| wall_following_1 | Binary classification. | Is the instance of class 1? |
| wall_following_2 | Binary classification. | Is the instance of class 2? |
| wall_following_3 | Binary classification. | Is the instance of class 3? | |
false |
# Dataset card for "george-chou/HEp2"
## Usage
### Print
```python
from datasets import load_dataset
data = load_dataset("george-chou/HEp2")
trainset = data["train"]
validset = data["validation"]
testset = data["test"]
labels = trainset.features["label"].names
for split in (trainset, validset, testset):
    for item in split:
        print("image: ", item["image"])
        print("label name: " + labels[item["label"]])
```
### Use on Torch DataLoader
```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision.transforms import *
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
compose = Compose([
Resize(300),
CenterCrop(300),
RandomAffine(5),
ToTensor(),
Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
def transform(example_batch):
inputs = [compose(x.convert("RGB")) for x in example_batch["image"]]
example_batch["image"] = inputs
return example_batch
ds = load_dataset("george-chou/HEp2")
trainset = ds["train"].with_transform(transform)
validset = ds["validation"].with_transform(transform)
testset = ds["test"].with_transform(transform)
traLoader = DataLoader(trainset, batch_size=4)
valLoader = DataLoader(validset, batch_size=4)
tesLoader = DataLoader(testset, batch_size=4)
for loader in (traLoader, valLoader, tesLoader):
    for i, data in enumerate(loader, 0):
        inputs, labels = data["image"].to(device), data["label"].to(device)
        print("inputs: ", inputs)
        print("labels: ", labels)
```
## Maintenance
```bash
git clone git@hf.co:datasets/george-chou/HEp2
``` |
false |
Chinese/English question-answer data extracted from WikiHow pages.
Related project: [MNBVC](https://github.com/esbatmop/MNBVC)
Extraction tool: [WikiHowQAExtractor](https://github.com/wanicca/WikiHowQAExtractor) |
false | # Dataset Card for huatuo_consultation_qa
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/HuatuoGPT**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We collected data from a medical consultation website, consisting of many online consultation records by medical experts. Each record is a QA pair: a patient raises a question and a medical doctor answers it. The basic information of doctors (including name, hospital organization, and department) was recorded.
We directly crawled patients' questions and doctors' answers as QA pairs, obtaining 32,708,346 pairs. Subsequently, we removed the QA pairs containing special characters and removed the repeated pairs. Finally, we got 25,341,578 QA pairs.
**Please note that for some reasons we cannot directly provide text data, so the answer part of our dataset is a URL. If you want to use text data, you can refer to the other two parts of our open-source datasets ([huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa), [huatuo_knowledge_graph_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa)), or use the URLs for data collection.**
## Dataset Creation
### Source Data
....
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
false |
# Dataset Card for Aksharantar
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/AI4Bharat/IndicLID
- **Paper:** [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Bhasha-Abhijnaanam is a language identification test set for native-script as well as Romanized text which spans 22 Indic languages.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
| <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> |
| -------------- | -------------- | -------------- | --------------- | -------------- | ------------- |
| Assamese (asm) | Hindi (hin) | Maithili (mai) | Nepali (nep) | Sanskrit (san) | Tamil (tam) |
| Bengali (ben) | Kannada (kan) | Malayalam (mal)| Oriya (ori) | Santali (sat) | Telugu (tel) |
| Bodo(brx) | Kashmiri (kas) | Manipuri (mni) | Punjabi (pan) | Sindhi (snd) | Urdu (urd) |
| Gujarati (guj) | Konkani (kok) | Marathi (mar) | | | |
## Dataset Structure
### Data Instances
A random sample from the Hindi (hin) test set:
```
{
"unique_identifier": "hin1",
"native sentence": "",
"romanized sentence": "",
"language": "Hindi",
"script": "Devanagari",
"source": "Dakshina",
}
```
### Data Fields
- `unique_identifier` (string): 3-letter language code followed by a unique number in Test set.
- `native sentence` (string): A sentence in Indic language.
- `romanized sentence` (string): Transliteration of native sentence in English (Romanized sentence).
- `language` (string): Language of native sentence.
- `script` (string): Script in which native sentence is written.
- `source` (string): Source of the data.
For created data sources, depending on the destination/sampling method of a pair in a language, it will be one of:
- Dakshina Dataset
- Flores-200
- Manually Romanized
- Manually generated
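As a small illustration, the `unique_identifier` convention described above (a 3-letter language code followed by a running number, e.g. `hin1`) can be split like this:

```python
def split_identifier(uid: str) -> tuple:
    """Separate the 3-letter language code from the running number."""
    return uid[:3], int(uid[3:])

print(split_identifier("hin1"))  # ('hin', 1)
```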
### Data Splits
| Subset | asm | ben | brx | guj | hin | kan | kas (Perso-Arabic) | kas (Devanagari) | kok | mai | mal | mni (Bengali) | mni (Meetei Mayek) | mar | nep | ori | pan | san | sat | tam | tel | urd |
|:------:|:---:|:---:|:---:|:---:|:---:|:---:|:------------------:|:----------------:|:---:|:---:|:---:|:-------------:|:------------------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Native | 1012 | 5606 | 1500 | 5797 | 5617 | 5859 | 2511 | 1012 | 1500 | 2512 | 5628 | 1012 | 1500 | 5611 | 2512 | 1012 | 5776 | 2510 | 2512 | 5893 | 5779 | 5751 | 6883 |
| Romanized | 512 | 4595 | 433 | 4785 | 4606 | 4848 | 450 | 0 | 444 | 439 | 4617 | 0 | 442 | 4603 | 423 | 512 | 4765 | 448 | 0 | 4881 | 4767 | 4741 | 4371 |
## Dataset Creation
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
#### Who are the source language producers?
[More Information Needed]
### Annotations
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
#### Who are the annotators?
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This data is released under the following licensing scheme:
- Manually collected data: Released under CC0 license.
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of manually collected data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://indicnlp.ai4bharat.org/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Aksharantar</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
```
@misc{madhani2023bhashaabhijnaanam,
title={Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages},
author={Yash Madhani and Mitesh M. Khapra and Anoop Kunchukuttan},
year={2023},
eprint={2305.15814},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
---
|
false |
# DragonFire0159x/nijijourney-images
Dataset with images generated by niji-journey.
Contains only images, no prompts.
# What's in the repository
The repository contains archives of different dataset sizes.
For example, the niji_dataset_404.zip archive contains 404 images.
You can also use it to fine-tune Stable Diffusion. |
false |
# TripClick Baselines with Improved Training Data
*Establishing Strong Baselines for TripClick Health Retrieval* by Sebastian Hofstätter, Sophia Althammer, Mete Sertkan and Allan Hanbury
https://arxiv.org/abs/2201.00365
**tl;dr** We create strong re-ranking and dense retrieval baselines (BERT<sub>CAT</sub>, BERT<sub>DOT</sub>, ColBERT, and TK) for TripClick (health ad-hoc retrieval). We improve the – originally too noisy – training data with a simple negative sampling policy. We achieve large gains over BM25 in the re-ranking and retrieval setting on TripClick, which were not achieved with the original baselines. We publish the improved training files for everyone to use.
If you have any questions, suggestions, or want to collaborate please don't hesitate to get in contact with us via [Twitter](https://twitter.com/s_hofstaetter) or mail to s.hofstaetter@tuwien.ac.at
**Please cite our work as:**
````
@misc{hofstaetter2022tripclick,
title={Establishing Strong Baselines for TripClick Health Retrieval},
author={Sebastian Hofst{\"a}tter and Sophia Althammer and Mete Sertkan and Allan Hanbury},
year={2022},
eprint={2201.00365},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
````
## Published Training Files
We publish the improved training files without the text content, instead using the ids from TripClick (with permission from the TripClick owners); for the text content, please get the full TripClick dataset from [the TripClick Github page](https://github.com/tripdatabase/tripclick).
Our training file **improved_tripclick_train_triple-ids.tsv** has the format ``query_id pos_passage_id neg_passage_id`` (with tab separation).
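A minimal sketch of reading such a triple file (the ids below are made up for illustration; the real files use TripClick ids):

```python
import csv

def read_triples(lines):
    """Yield (query_id, pos_passage_id, neg_passage_id) from tab-separated rows."""
    for q, pos, neg in csv.reader(lines, delimiter="\t"):
        yield q, pos, neg

sample = ["q1\tp10\tp99", "q2\tp11\tp98"]
print(list(read_triples(sample)))  # [('q1', 'p10', 'p99'), ('q2', 'p11', 'p98')]
```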
----
For more information on how to use the training files see: https://github.com/sebastian-hofstaetter/tripclick |
true | # AutoNLP Dataset for project: traffic_nlp_binary
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project traffic_nlp_binary.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "1 train is still delayed in both directions",
"target": 1
},
{
"text": "maybe there was no train traffic ????. i know the feeling.",
"target": 1
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=2, names=['0', '1'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Data Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2195 |
| valid | 549 |
|
false |
# liv4ever v1
This is the Livonian 4-lingual parallel corpus. Livonian is a Uralic / Finnic language with just about 20 fluent speakers and no native speakers (as of 2021). The texts and translations in this corpus were collected from all the digital text resources that could be found by the authors; scanned and printed materials are left for future work.
The corpus includes parallel data for Livonian-Latvian, Livonian-Estonian and Livonian-English; the data has been collected in 2021. After retrieval it was normalized in terms of different orthographies of Livonian and manually sentence-aligned where needed. It was collected from the following sources, with sentence counts per language pair:
* Dictionary - example sentences from the Livonian-Latvian-Estonian dictionary;
* liv-lv: 10'388,
* liv-et: 10'378
* Stalte - the alphabet book by Kōrli Stalte, translated into Estonian and Latvian;
* liv-lv: 842,
* liv-et: 685
* Poetry - the poetry collection book "Ma võtan su õnge, tursk / Ma akūb sīnda vizzõ, tūrska", with Estonian translations;
* liv-et: 770
* Vääri - the book by Eduard Vääri about Livonian language and culture;
* liv-et: 592
* Satversme - translations of the Latvian Constitution into Livonian, Estonian and English;
* liv-en: 380,
* liv-lv: 414,
* liv-et: 413
* Facebook - social media posts by the Livonian Institute and Livonian Days with original translations;
* liv-en: 123,
* liv-lv: 124,
* liv-et: 7
* JEFUL - article abstracts from the Journal of Estonian and Finno-Ugric Linguistics, special issues dedicated to Livonian studies, translated into Estonian and English;
* liv-en: 36,
* liv-et: 49
* Trilium - the book with a collection of Livonian poetry, foreword and afterword translated into Estonian and Latvian;
* liv-lv: 51,
* liv-et: 53
* Songs - material crawled off lyricstranslate.com;
* liv-en: 54,
* liv-lv: 54,
* liv-fr: 31 |
false |
# Dataset Card for SciDTB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/PKU-TANGENT/SciDTB
- **Repository:** https://github.com/PKU-TANGENT/SciDTB
- **Paper:** https://aclanthology.org/P18-2071/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
SciDTB is a domain-specific discourse treebank annotated on scientific articles written in English. Different from the widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but does not sacrifice structural integrity. Furthermore, this treebank serves as a benchmark for evaluating discourse dependency parsers. This dataset can benefit many downstream NLP tasks such as machine translation and automatic summarization.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English.
## Dataset Structure
### Data Instances
A typical data point consists of `root`, which is a list of nodes in the dependency tree. Each node in the list has four fields: `id`, containing the id of the node; `parent`, containing the id of the parent node; `text`, the span that belongs to the current node; and `relation`, the relation between the current node and its parent node.
An example from SciDTB train set is given below:
```
{
"root": [
{
"id": 0,
"parent": -1,
"text": "ROOT",
"relation": "null"
},
{
"id": 1,
"parent": 0,
"text": "We propose a neural network approach ",
"relation": "ROOT"
},
{
"id": 2,
"parent": 1,
"text": "to benefit from the non-linearity of corpus-wide statistics for part-of-speech ( POS ) tagging . <S>",
"relation": "enablement"
},
{
"id": 3,
"parent": 1,
"text": "We investigated several types of corpus-wide information for the words , such as word embeddings and POS tag distributions . <S>",
"relation": "elab-aspect"
},
{
"id": 4,
"parent": 5,
"text": "Since these statistics are encoded as dense continuous features , ",
"relation": "cause"
},
{
"id": 5,
"parent": 3,
"text": "it is not trivial to combine these features ",
"relation": "elab-addition"
},
{
"id": 6,
"parent": 5,
"text": "comparing with sparse discrete features . <S>",
"relation": "comparison"
},
{
"id": 7,
"parent": 1,
"text": "Our tagger is designed as a combination of a linear model for discrete features and a feed-forward neural network ",
"relation": "elab-aspect"
},
{
"id": 8,
"parent": 7,
"text": "that captures the non-linear interactions among the continuous features . <S>",
"relation": "elab-addition"
},
{
"id": 9,
"parent": 10,
"text": "By using several recent advances in the activation functions for neural networks , ",
"relation": "manner-means"
},
{
"id": 10,
"parent": 1,
"text": "the proposed method marks new state-of-the-art accuracies for English POS tagging tasks . <S>",
"relation": "evaluation"
}
]
}
```
More such raw data instances can be found [here](https://github.com/PKU-TANGENT/SciDTB/tree/master/dataset)
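The `parent` pointers in a record like the one above fully determine the dependency tree; a small sketch of recovering each node's children from the flat list (node dicts abbreviated to the two relevant fields):

```python
from collections import defaultdict

def children_map(root_nodes):
    """Group node ids by their parent id."""
    children = defaultdict(list)
    for node in root_nodes:
        if node["parent"] >= 0:  # skip the artificial ROOT node (parent == -1)
            children[node["parent"]].append(node["id"])
    return dict(children)

# Abbreviated version of the sample instance above:
nodes = [
    {"id": 0, "parent": -1}, {"id": 1, "parent": 0},
    {"id": 2, "parent": 1}, {"id": 3, "parent": 1},
    {"id": 4, "parent": 5}, {"id": 5, "parent": 3},
]
print(children_map(nodes))  # {0: [1], 1: [2, 3], 5: [4], 3: [5]}
```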
### Data Fields
- id: an integer identifier for the node
- parent: an integer identifier for the parent node
- text: a string containing text for the current node
- relation: a string representing discourse relation between current node and parent node
### Data Splits
The dataset consists of three splits: `train`, `dev` and `test`.
| Train | Valid | Test |
| ------ | ----- | ---- |
| 743 | 154 | 152|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
More information can be found [here](https://aclanthology.org/P18-2071/)
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{yang-li-2018-scidtb,
title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts",
author = "Yang, An and
Li, Sujian",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-2071",
doi = "10.18653/v1/P18-2071",
pages = "444--449",
abstract = "Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work.",
}
``` |
false |
# Dataset Card for BibleNLP Corpus
### Dataset Summary
Partial and complete Bible translations in 615 languages, aligned by verse.
### Languages
aau, aaz, abx, aby, acf, acu, adz, aey, agd, agg, agm, agn, agr, agu, aia, ake, alp, alq, als, aly, ame, amk, amp, amr, amu, anh, anv, aoi, aoj, apb, apn, apu, apy, arb, arl, arn, arp, aso, ata, atb, atd, atg, auc, aui, auy, avt, awb, awk, awx, azg, azz, bao, bbb, bbr, bch, bco, bdd, bea, bel, bgs, bgt, bhg, bhl, big, bjr, bjv, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bnp, boa, boj, bon, box, bqc, bre, bsn, bsp, bss, buk, bus, bvr, bxh, byx, bzd, bzj, cab, caf, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, crn, crx, cso, cta, ctp, ctu, cub, cuc, cui, cut, cux, cwe, daa, dad, dah, ded, deu, dgr, dgz, dif, dik, dji, djk, dob, dwr, dww, dwy, eko, emi, emp, eng, epo, eri, ese, etr, faa, fai, far, for, fra, fuf, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, gia, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, gul, gum, guo, gvc, gvf, gwi, gym, gyr, hat, haw, hbo, hch, heb, heg, hix, hla, hlt, hns, hop, hrv, hub, hui, hus, huu, huv, hvn, ign, ikk, ikw, imo, inb, ind, ino, iou, ipi, ita, jac, jao, jic, jiv, jpn, jvn, kaq, kbc, kbh, kbm, kdc, kde, kdl, kek, ken, kew, kgk, kgp, khs, kje, kjs, kkc, kky, klt, klv, kms, kmu, kne, knf, knj, kos, kpf, kpg, kpj, kpw, kqa, kqc, kqf, kql, kqw, ksj, ksr, ktm, kto, kud, kue, kup, kvn, kwd, kwf, kwi, kwj, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, leu, lex, lgl, lid, lif, lww, maa, maj, maq, mau, mav, maz, mbb, mbc, mbh, mbl, mbt, mca, mcb, mcd, mcf, mcp, mdy, med, mee, mek, meq, met, meu, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkn, mks, mlh, mlp, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, muy, mva, mvn, mwc, mxb, mxp, mxq, mxt, myu, myw, myy, mzz, nab, naf, nak, nay, nbq, nca, nch, ncj, ncl, ncu, ndj, nfa, ngp, ngu, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nin, nko, nld, nlg, nna, nnq, not, nou, npl, nsn, nss, 
ntj, ntp, nwi, nyu, obo, ong, ons, ood, opm, ote, otm, otn, otq, ots, pab, pad, pah, pao, pes, pib, pio, pir, pjt, plu, pma, poe, poi, pon, poy, ppo, prf, pri, ptp, ptu, pwg, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, rkb, rmc, roo, rop, rro, ruf, rug, rus, sab, san, sbe, seh, sey, sgz, shj, shp, sim, sja, sll, smk, snc, snn, sny, som, soq, spa, spl, spm, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, tav, tbc, tbl, tbo, tbz, tca, tee, ter, tew, tfr, tgp, tif, tim, tiy, tke, tku, tna, tnc, tnn, tnp, toc, tod, toj, ton, too, top, tos, tpt, trc, tsw, ttc, tue, tuo, txu, ubr, udu, ukr, uli, ura, urb, usa, usp, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbp, wed, wer, wim, wmt, wmw, wnc, wnu, wos, wrk, wro, wsk, wuv, xav, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yml, yre, yuj, yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zar, zas, zat, zav, zaw, zca, zia, ziw, zos, zpc, zpl, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp
## Dataset Structure
### Data Fields
**translation**
- **languages** - an N length list of the languages of the translations, sorted alphabetically
- **translation** - an N length list with the translations each corresponding to the language specified in the above field
**files**
- **lang** - an N length list of the languages of the files, in order of input
- **file** - an N length list of the filenames from the corpus on github, each corresponding with the lang above
**ref** - the verse(s) contained in the record, as a list, with each represented with: ``<a three letter book code> <chapter number>:<verse number>``
**licenses** - an N length list of licenses, corresponding to the list of files above
**copyrights** - information on copyright holders, corresponding to the list of files above
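The ``ref`` strings described above can be parsed with a small helper. A sketch (the pattern is inferred from the field description and covers only single-verse references, not ranges such as ``GEN 1:1-3``):

```python
import re

# Parse a verse reference of the form "<three letter book code> <chapter>:<verse>",
# e.g. "GEN 1:1". Book codes may contain digits (e.g. "1CO"), so \w is used.
REF_PATTERN = re.compile(r"^(?P<book>\w{3}) (?P<chapter>\d+):(?P<verse>\d+)$")

def parse_ref(ref: str):
    m = REF_PATTERN.match(ref)
    if m is None:
        raise ValueError(f"unrecognized reference: {ref!r}")
    return m.group("book"), int(m.group("chapter")), int(m.group("verse"))

print(parse_ref("GEN 1:1"))  # ('GEN', 1, 1)
```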
### Usage
The dataset loading script requires `tqdm`, `ijson`, and `numpy` to be installed.
Specify the languages to be paired as a list of ISO 639-3 language codes, such as ``languages = ['eng', 'fra']``.
By default, the script returns individual verse pairs as well as verses covering a full range. If only the individual verses are desired, use ``pair='single'``. If only the maximum-range pairing is desired, use ``pair='range'`` (for example, if one text uses the verse range covering GEN 1:1-3, all texts would return only the full-length pairing).
## Sources
https://github.com/BibleNLP/ebible-corpus |
true |
# Dataset Card for Czech Subjectivity Dataset
### Dataset Summary
Czech subjectivity dataset (Subj-CS) of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper description https://arxiv.org/abs/2204.13915
### Github
https://github.com/pauli31/czech-subjectivity-dataset
### Supported Tasks and Leaderboards
Subjectivity Analysis
### Languages
Czech
### Data Instances
train/dev/test
### Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information
If you use our dataset or software for academic research, please cite our [paper](https://arxiv.org/abs/2204.13915)
```
@article{pib2022czech,
title={Czech Dataset for Cross-lingual Subjectivity Classification},
author={Pavel Přibáň and Josef Steinberger},
year={2022},
eprint={2204.13915},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contact
pribanp@kiv.zcu.cz
### Contributions
Thanks to [@pauli31](https://github.com/pauli31) for adding this dataset. |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of information retrieval models; results are primarily reported using nDCG@10.
The current best performing models can be found on the [leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
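A minimal sketch of parsing these files into Python dictionaries (the sample lines are the ones from the examples above; real corpora would be read from disk):

```python
import csv
import io
import json

def load_corpus(jsonl_lines):
    """Parse corpus .jsonl lines into {doc_id: {"title": ..., "text": ...}}."""
    corpus = {}
    for line in jsonl_lines:
        doc = json.loads(line)
        corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    return corpus

def load_qrels(tsv_text):
    """Parse a qrels .tsv (header: query-id, corpus-id, score) into {qid: {doc_id: score}}."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    next(reader)  # skip the header row
    qrels = {}
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels

corpus = load_corpus(
    ['{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}']
)
qrels = load_qrels("query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n")
print(corpus["doc1"]["title"])  # Albert Einstein
print(qrels)                    # {'q1': {'doc1': 1}}
```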
### Data Instances
A high-level example of any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
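The three structures work together for retrieval evaluation: queries are matched against the corpus and scored against the qrels. As a toy illustration (abbreviated texts and simple word-overlap scoring, not how BEIR models actually retrieve):

```python
import re

corpus = {
    "doc1": "Albert Einstein was a German-born theoretical physicist who developed "
            "the theory of relativity. He is best known for his mass-energy "
            "equivalence formula E = mc2.",
    "doc2": "Wheat beer is a top-fermented beer which is brewed with a large "
            "proportion of wheat relative to the amount of malted barley.",
}
queries = {
    "q1": "Who developed the mass-energy equivalence formula?",
    "q2": "Which beer is brewed with a large proportion of wheat?",
}

def tokenize(text):
    # lowercase word tokens, keeping internal hyphens (e.g. "mass-energy")
    return set(re.findall(r"[a-z0-9\-]+", text.lower()))

def best_match(query):
    q = tokenize(query)
    return max(corpus, key=lambda doc_id: len(q & tokenize(corpus[doc_id])))

print(best_match(queries["q1"]))  # doc1
print(best_match(queries["q2"]))  # doc2
```

A real evaluation would then compare each query's ranked documents against the qrels judgements.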
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
false |
DALL-E-Cats is a synthetic animal image dataset and the successor to DALL-E-Dogs. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This dataset is released under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/). |
false |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
true | # Dataset Card for MoralExceptQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [MoralCoT](https://github.com/feradauto/MoralCoT)
- **Paper:** [When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment](https://arxiv.org/abs/2210.01478)
- **Point of Contact:** [Fernando Gonzalez](mailto:fgonzalez@ethz.ch) , [Zhijing Jin](mailto:zjin@tue.mpg.de)
### Dataset Summary
A challenge set for moral-exception question answering, covering cases that involve potentially permissible moral exceptions. Our challenge set, MoralExceptQA, is drawn from a series of recent moral psychology studies designed to investigate the flexibility of human moral cognition – specifically, the ability of humans to figure out when it is permissible to break a previously established or well-known rule.
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
Each instance is a rule-breaking scenario accompanied by an average human response.
### Data Fields
- `study`: The moral psychology study. Studies were designed to investigate the ability of humans
to figure out when it is permissible to break a previously established or well-known rule.
- `context`: The context of the scenario. Different context within the same study are potentially governed by the same rule.
- `condition`: Condition in the scenario.
- `scenario`: Text description of the scenario.
- `human.response`: Average human response (scale 0 to 1) equivalent to the % of people that considered that breaking the rule is permissible.
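As a sketch of how `human.response` might be thresholded into a binary permissibility label (the 0.5 cutoff is an illustrative assumption, not part of the dataset):

```python
def majority_permissible(human_response: float, threshold: float = 0.5) -> bool:
    """True when at least `threshold` of respondents judged rule-breaking permissible."""
    return human_response >= threshold

print(majority_permissible(0.73))  # True
print(majority_permissible(0.21))  # False
```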
### Data Splits
MoralExceptQA contains one split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Information about the data collection and annotators can be found in the appendix of [our paper](https://arxiv.org/abs/2210.01478).
### Personal and Sensitive Information
The MoralExceptQA dataset does not have privacy concerns.
## Considerations for Using the Data
### Social Impact of Dataset
The intended use of this work is to contribute to AI safety research. We do not intend this work to be developed as a tool to automate moral decision-making on behalf of humans, but instead as a way of mitigating risks caused by LLMs’ misunderstanding of human values. The MoralExceptQA dataset does not have privacy concerns or offensive content.
### Discussion of Biases
Our subjects are U.S. residents, and therefore our conclusions are limited to this population.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The MoralExceptQA dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2210.01478,
doi = {10.48550/ARXIV.2210.01478},
url = {https://arxiv.org/abs/2210.01478},
author = {Jin, Zhijing and Levine, Sydney and Gonzalez, Fernando and Kamal, Ojasv and Sap, Maarten and Sachan, Mrinmaya and Mihalcea, Rada and Tenenbaum, Josh and Schölkopf, Bernhard},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
``` |
false |
# Dataset Card for ravnursson_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Ravnursson Faroese Speech and Transcripts](http://hdl.handle.net/20.500.12537/276)
- **Repository:** [Clarin.is](http://hdl.handle.net/20.500.12537/276)
- **Paper:** [Creating a basic language resource kit for faroese.](https://aclanthology.org/2022.lrec-1.495.pdf)
- **Point of Contact:** [Annika Simonsen](mailto:annika.simonsen@hotmail.com), [Carlos Mena](mailto:carlos.mena@ciempiess.org)
### Dataset Summary
The corpus "RAVNURSSON FAROESE SPEECH AND TRANSCRIPTS" (or RAVNURSSON Corpus for short) is a collection of speech recordings with transcriptions intended for Automatic Speech Recognition (ASR) applications in the language that is spoken at the Faroe Islands (Faroese). It was curated at the Reykjavík University (RU) in 2022.
The RAVNURSSON Corpus is an extract of the "Basic Language Resource Kit 1.0" (BLARK 1.0) [1] developed by the Ravnur Project from the Faroe Islands [2]. As a matter of fact, the name RAVNURSSON comes from Ravnur (a tribute to the Ravnur Project) and the suffix "son" which in Icelandic means "son of". Therefore, the name "RAVNURSSON" means "The (Icelandic) son of Ravnur". The double "ss" is just for aesthetics.
The audio was collected by recording speakers reading texts. The participants are aged 15-83, divided into 3 age groups: 15-35, 36-60 and 61+.
The recordings come from 249 female speakers and 184 male speakers; 433 speakers in total. The recordings were made on TASCAM DR-40 Linear PCM audio recorders using the built-in stereo microphones, in WAVE 16-bit with a sample rate of 48 kHz, and were then downsampled to 16 kHz @ 16-bit mono for this corpus.
[1] Simonsen, A., Debess, I. N., Lamhauge, S. S., & Henrichsen, P. J. Creating a basic language resource kit for Faroese. In LREC 2022. 13th International Conference on Language Resources and Evaluation.
[2] Website. The Project Ravnur under the Talutøkni Foundation https://maltokni.fo/en/the-ravnur-project
### Example Usage
The RAVNURSSON Corpus is divided into 3 splits: train, validation and test. To load the full dataset:
```python
from datasets import load_dataset
ravnursson = load_dataset("carlosdanielhernandezmena/ravnursson_asr")
```
To load a specific split (for example, the validation split), pass it via the `split` argument:
```python
from datasets import load_dataset
ravnursson = load_dataset("carlosdanielhernandezmena/ravnursson_asr", split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
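The WER mentioned above is the word-level edit distance between reference and hypothesis, normalized by the reference length. A minimal dynamic-programming sketch (not part of the dataset tooling):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(round(wer("endurskin eru týdningarmikil í myrkri",
                "endurskin eru týdningarmikil í myrkri"), 3))  # 0.0
print(round(wer("a b c d", "a x c"), 3))  # 0.5
```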
### Languages
The audio is in Faroese.
The reading prompts for the RAVNURSSON Corpus have been generated by expert linguists. The whole corpus was balanced for phonetic and dialectal coverage; Test and Dev subsets are gender-balanced. Tabular computer-searchable information is included as well as written documentation.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'KAM06_151121_0101',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/32b4a757027b72b8d2e25cd9c8be9c7c919cc8d4eb1a9a899e02c11fd6074536/dev/RDATA2/KAM06_151121/KAM06_151121_0101.flac',
'array': array([ 0.0010376 , -0.00521851, -0.00393677, ..., 0.00128174,
0.00076294, 0.00045776], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'KAM06_151121',
'gender': 'female',
'age': '36-60',
'duration': 4.863999843597412,
'normalized_text': 'endurskin eru týdningarmikil í myrkri',
'dialect': 'sandoy'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `age` (string) - range of age of the speaker: Younger (15-35), Middle-aged (36-60) or Elderly (61+).
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription
* `dialect` (string) - dialect group, for example "Suðuroy" or "Sandoy".
### Data Splits
The speech material has been subdivided into portions for training (train), development (evaluation) and testing (test). The lengths of the portions are: train = 100h08m, test = 4h30m, dev (evaluation) = 4h30m.
To load a specific portion, please see the section "Example Usage" above.
The development and test portions have exactly 10 male and 10 female speakers each and both portions have exactly the same size in hours (4.5h each).
## Dataset Creation
### Curation Rationale
The directory called "speech" contains all the speech files of the corpus. The files in the speech directory are divided into three directories: train, dev and test. The train portion is sub-divided into three types of recordings: RDATA1O, RDATA1OP and RDATA2; this is due to the organization of the recordings in the original BLARK 1.0. There, the recordings are divided into Rdata1 and Rdata2.
One main difference between Rdata1 and Rdata2 is that the reading environment for Rdata2 was controlled by a software tool called "PushPrompt", which is included in the original BLARK 1.0. Another main difference is that Rdata1 includes some transcriptions labeled at the phoneme level. For this reason, the audio files in the speech directory of the RAVNURSSON corpus are divided into the folders RDATA1O, where "O" stands for "Orthographic", and RDATA1OP, where "O" stands for "Orthographic" and "P" for "Phonetic".
In the case of the dev and test portions, the data come only from Rdata2 which does not have labels at the phonetic level.
It is important to clarify that the RAVNURSSON Corpus only includes transcriptions at the orthographic level.
### Source Data
#### Initial Data Collection and Normalization
The dataset was released with normalized text only at an orthographic level in lower-case. The normalization process was performed by automatically removing punctuation marks and characters that are not present in the Faroese alphabet.
#### Who are the source language producers?
* The utterances were recorded using a TASCAM DR-40.
* Participants self-reported their age group, gender, native language and dialect.
* Participants are aged between 15 to 83 years.
* The corpus contains 71949 speech files from 433 speakers, totalling 109 hours and 9 minutes.
### Annotations
#### Annotation process
Most of the reading prompts were selected by experts from a Faroese text corpus (news, blogs, Wikipedia etc.) and were edited to fit the format. Reading prompts within specific domains (such as Faroese place names, numbers, license plates, telling time etc.) were written by the Ravnur Project. A software tool called PushPrompt was then used for the reading sessions (voice recordings). PushPrompt presents the text items in the reading material to the reader, allowing him/her to manage the session interactively (adjusting the reading tempo, repeating speech productions at wish, inserting short breaks as needed, etc.). When the reading session is completed, a log file (with time stamps for each production) is written as a data table compliant with the TextGrid format.
#### Who are the annotators?
The corpus was annotated by the [Ravnur Project](https://maltokni.fo/en/the-ravnur-project)
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This is the first ASR corpus in Faroese.
### Discussion of Biases
As the number of reading prompts was limited, prompts in the RAVNURSSON corpus are commonly read by more than one speaker. This is relevant because it is common practice in ASR to build a language model from the prompts found in the train portion of a corpus. That is not recommended for the RAVNURSSON Corpus, since many prompts are shared across all portions, which would introduce a significant bias into the language modeling task.
In this section we present some statistics about the repeated prompts through all the portions of the corpus.
- In the train portion:
* Total number of prompts = 65616
* Number of unique prompts = 38646
There are 26970 repeated prompts in the train portion. In other words, 41.1% of the prompts are repeated.
- In the test portion:
* Total number of prompts = 3002
* Number of unique prompts = 2887
There are 115 repeated prompts in the test portion. In other words, 3.83% of the prompts are repeated.
- In the dev portion:
* Total number of prompts = 3331
* Number of unique prompts = 3302
There are 29 repeated prompts in the dev portion. In other words, 0.87% of the prompts are repeated.
- Considering the corpus as a whole:
* Total number of prompts = 71949
* Number of unique prompts = 39945
There are 32004 repeated prompts in the whole corpus. In other words, 44.48% of the prompts are repeated.
NOTICE: None of the three portions of the corpus share speakers.
### Other Known Limitations
"RAVNURSSON FAROESE SPEECH AND TRANSCRIPTS" by Carlos Daniel Hernández Mena and Annika Simonsen is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
## Additional Information
### Dataset Curators
The dataset was collected by Annika Simonsen and curated by Carlos Daniel Hernández Mena.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{carlosmenaravnursson2022,
title={Ravnursson Faroese Speech and Transcripts},
author={Hernandez Mena, Carlos Daniel and Simonsen, Annika},
year={2022},
url={http://hdl.handle.net/20.500.12537/276},
}
```
### Contributions
This project was made possible under the umbrella of the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
Special thanks to Dr. Jón Guðnason, professor at Reykjavík University and head of the Language and Voice Lab (LVL) for providing computational resources.
|
false |
# Dataset Card for Open-Domain Question Answering Wikipedia Corpora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
## Dataset Description
### Dataset Summary
The Wikipedia corpus variants provided can serve as knowledge sources for question-answering systems based on a retriever–reader pipeline. These corpus variants and their corresponding experiments are described further in the paper entitled:
> Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering.
## Dataset Structure
### Data Fields
The dataset consists of passages that have been segmented from Wikipedia articles.
For each passage, the following fields are provided
- ```docid```: The passage id, in the format X#Y, where passages from the same article share the same X and Y denotes the segment id within the article
- ```title```: The title of the article from where the passage comes
- ```text```: The text content of the passage
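For illustration, a `docid` can be split back into its article and segment components — a minimal sketch (the helper name is ours, not part of the dataset):

```python
def parse_docid(docid: str):
    """Split a docid of the form 'X#Y' into (article id, segment id)."""
    # rpartition keeps any '#' inside the article id intact
    article_id, _, segment = docid.rpartition("#")
    return article_id, int(segment)

print(parse_docid("Anarchism#0"))  # → ('Anarchism', 0)
```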
### Data Splits
There are 6 corpus variants in total
- ```wiki-text-100w-karpukhin```: The original DPR Wikipedia corpus with non-overlapping passages, each 100 words long, from Karpukhin et al.,
> Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. [Dense Passage Retrieval for Open-Domain Question Answering](https://www.aclweb.org/anthology/2020.emnlp-main.550/). _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020.
- ```wiki-text-100w-tamber```: Our replication of the above corpus
- ```wiki-text-6-3-tamber```: A corpus similar to the above, i.e. without tables, infoboxes, and lists, but segmented differently: passages are 6 sentences long with a stride of 3 sentences. Note that this means passages overlap.
- ```wiki-text-8-4-tamber```: Like wiki-text-6-3, but with a passage size of 8 sentences and a stride of 4 sentences.
- ```wiki-all-6-3-tamber```: A corpus with tables, infoboxes, and lists included with a passage size of 6 sentences and a stride of 3 sentences.
- ```wiki-all-8-4-tamber```: Like wiki-all-6-3, but with a passage size of 8 sentences and a stride of 4 sentences.
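The sentence-window segmentation used for the `-6-3-` and `-8-4-` variants can be sketched as follows — a simplified illustration, not the actual Pyserini pre-processing code (`sentences` stands in for a sentence-tokenized article):

```python
def segment_article(sentences, size=6, stride=3):
    """Collect overlapping passages of `size` sentences, advancing by `stride`."""
    passages = []
    for start in range(0, len(sentences), stride):
        window = sentences[start:start + size]
        if window:
            passages.append(" ".join(window))
        if start + size >= len(sentences):  # last window reached the end
            break
    return passages

sents = [f"s{i}." for i in range(10)]
print(segment_article(sents))
# → ['s0. s1. s2. s3. s4. s5.', 's3. s4. s5. s6. s7. s8.', 's6. s7. s8. s9.']
```

With a stride of half the passage size, every sentence (except at the edges) appears in two passages, matching the overlap described above.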
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
We start by downloading the full December 20, 2018 Wikipedia XML dump ```enwiki-20181220-pages-articles.xml``` from the Internet Archive: https://archive.org/details/enwiki-20181220. This is then pre-processed by WikiExtractor: https://github.com/attardi/wikiextractor (making sure to modify the code to include lists as desired and replacing any tables with the string "TABLETOREPLACE") and DrQA: https://github.com/facebookresearch/DrQA/tree/main/scripts/retriever (again making sure to modify the code to not remove lists as desired).
We then apply the [pre-processing script](https://github.com/castorini/pyserini/blob/master/docs/experiments-wiki-corpora.md) we make available in [Pyserini](https://github.com/castorini/pyserini) to generate the different corpus variants.
|
false |
<div align="center">
<img width="640" alt="keremberke/painting-style-classification" src="https://huggingface.co/datasets/keremberke/painting-style-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Realism', 'Art_Nouveau_Modern', 'Analytical_Cubism', 'Cubism', 'Expressionism', 'Action_painting', 'Synthetic_Cubism', 'Symbolism', 'Ukiyo_e', 'Naive_Art_Primitivism', 'Post_Impressionism', 'Impressionism', 'Fauvism', 'Rococo', 'Minimalism', 'Mannerism_Late_Renaissance', 'Color_Field_Painting', 'High_Renaissance', 'Romanticism', 'Pop_Art', 'Contemporary_Realism', 'Baroque', 'New_Realism', 'Pointillism', 'Northern_Renaissance', 'Early_Renaissance', 'Abstract_Expressionism']
```
### Number of Images
```json
{'valid': 1295, 'train': 4493, 'test': 629}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/painting-style-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/art-dataset/wiki-art/dataset/1](https://universe.roboflow.com/art-dataset/wiki-art/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ wiki-art_dataset,
title = { wiki art Dataset },
type = { Open Source Dataset },
author = { Art Dataset },
howpublished = { \\url{ https://universe.roboflow.com/art-dataset/wiki-art } },
url = { https://universe.roboflow.com/art-dataset/wiki-art },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 9, 2022 at 1:47 AM GMT
It includes 6417 images.
Images are annotated in folder format across 27 style classes.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
|
true |
# Dataset Card for incivility-arizona-daily-star-comments
This is a collection of more than 6000 comments on Arizona Daily Star news articles from 2011 that have been manually annotated for various forms of incivility including aspersion, namecalling, sarcasm, and vulgarity.
## Dataset Structure
Each instance in the dataset corresponds to a single comment from a single commenter.
An instance's `text` field contains the text of the comment with any quotes of other commenters removed.
The remaining fields in each instance provide binary labels for each type of incivility annotated:
`aspersion`, `hyperbole`, `lying`, `namecalling`, `noncooperation`, `offtopic`, `pejorative`, `sarcasm`, `vulgarity`, and `other_incivility`.
The dataset provides three standard splits: `train`, `validation`, and `test`.
## Dataset Creation
The original annotation effort is described in:
- Kevin Coe, Kate Kenski, Stephen A. Rains.
[Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments](https://doi.org/10.1111/jcom.12104).
Journal of Communication, Volume 64, Issue 4, August 2014, Pages 658–679.
That dataset was converted to a computer-friendly form as described in section 4.2.1 of:
- Farig Sadeque.
[User behavior in social media: engagement, incivility, and depression](https://repository.arizona.edu/handle/10150/633192).
PhD thesis. The University of Arizona. 2019.
The current upload is a 2023 conversion of that form to a huggingface Dataset.
## Considerations for Using the Data
The data is intended for the study of incivility.
It should not be used to train models to generate incivility.
The human coders and their trainers were mostly [Western, educated, industrialized, rich and democratic (WEIRD)](https://www.nature.com/articles/466029a), which may have shaped how they evaluated incivility.
## Citation
```bibtex
@article{10.1111/jcom.12104,
author = {Coe, Kevin and Kenski, Kate and Rains, Stephen A.},
title = {Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments},
journal = {Journal of Communication},
volume = {64},
number = {4},
pages = {658-679},
year = {2014},
month = {06},
issn = {0021-9916},
doi = {10.1111/jcom.12104},
url = {https://doi.org/10.1111/jcom.12104},
}
``` |
false |
# Dataset Card for the Enriched "DCASE 2023 Challenge Task 2 Dataset".
## Table of contents
[//]: # (todo: create new)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Explore the data with Spotlight](#explore-the-data-with-spotlight)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Baseline system](#baseline-system)
- [Dataset Curators](#dataset-curators)
- [Licensing Information - Condition of use](#licensing-information---condition-of-use)
- [Citation Information (original)](#citation-information-original)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/)
- **Homepage:** [DCASE23 Task 2 Challenge](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring#evaluation)
- **Homepage:** [HF Dataset Creator](https://syoy.github.io/)
- **Original Dataset Upload (Dev):** [ZENODO: DCASE 2023 Challenge Task 2 Development Dataset](https://zenodo.org/record/7687464#.Y_9VtdLMLmE)
- **Paper:** [MIMII DG](https://arxiv.org/abs/2205.13879)
- **Paper:** [ToyADMOS2](https://arxiv.org/abs/2106.02369)
- **Paper:** [First-shot anomaly detection for machine condition monitoring: A domain generalization baseline](https://arxiv.org/pdf/2303.00455.pdf)
### Dataset Summary
[Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases. At [Renumics](https://renumics.com/) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
This dataset is an enriched version of the [dataset](https://zenodo.org/record/7690148#.ZAXsSdLMLmE) provided in the context of the [anomalous sound detection task](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring) of the [DCASE2023 challenge](https://dcase.community/challenge2023/). The enrichments include an embedding generated by a pre-trained [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer#transformers.ASTFeatureExtractor) and results of the official challenge [baseline implementation](https://github.com/nttcslab/dcase2023_task2_baseline_ae).
### DCASE23 Task2 Dataset
Once a year, the [DCASE community](https://dcase.community/) publishes a [challenge](https://dcase.community/challenge2023/) with several tasks in the context of acoustic event detection and classification. [Task 2 of this challenge](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring) deals with anomalous sound detection for machine condition monitoring. The original dataset is based on the [MIMII DG](https://arxiv.org/abs/2205.13879) and the [ToyADMOS2](https://arxiv.org/abs/2106.02369) datasets. Please cite the papers by [Harada et al.](https://arxiv.org/abs/2106.02369) and [Dohi et al.](https://arxiv.org/abs/2205.13879) if you use this dataset and the paper by [Harada et al.](https://arxiv.org/pdf/2303.00455.pdf) if you use the baseline results.
### Explore Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool Renumics Spotlight enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets[audio]
```
> **_Notice:_** On Linux, the non-Python dependency libsndfile must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information.
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/dcase23-task2-enriched", "dev", split="all", streaming=False)
```
Start exploring with a simple view that leverages embeddings to identify relevant data segments:
```python
from renumics import spotlight
df = dataset.to_pandas()
simple_layout = datasets.load_dataset_builder("renumics/dcase23-task2-enriched", "dev").config.get_layout(config="simple")
spotlight.show(df, dtype={'path': spotlight.Audio, "embeddings_ast-finetuned-audioset-10-10-0.4593": spotlight.Embedding}, layout=simple_layout)
```
You can use the UI to interactively configure the view on the data. Depending on the concrete task (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
In this example we focus on the valve class. We specifically look at normal data points that have high anomaly scores in both models. This is one example of how to find difficult examples or edge cases:
```python
from renumics import spotlight
extended_layout = datasets.load_dataset_builder("renumics/dcase23-task2-enriched", "dev").config.get_layout(config="extended")
spotlight.show(df, dtype={'path': spotlight.Audio, "embeddings_ast-finetuned-audioset-10-10-0.4593": spotlight.Embedding}, layout=extended_layout)
```

## Using custom model results and enrichments
When developing your custom model, you will want to use different kinds of information from your model (e.g. embeddings, anomaly scores, etc.) to gain further insights into the dataset and the model behavior.
Suppose you have your model's embeddings for each datapoint as a 2D-Numpy array called `embeddings` and your anomaly score as a 1D-Numpy array called `anomaly_scores`. Then you can add this information to the dataset:
```python
df['my_model_embedding'] = embeddings
df['anomaly_score'] = anomaly_scores
```
Depending on your concrete task you might want to use different enrichments. For a good overview on great open source tooling for uncertainty quantification, explainability and outlier detection, you can take a look at our [curated list for open source data-centric AI tooling](https://github.com/Renumics/awesome-open-data-centric-ai) on Github.
You can also save your view configuration in Spotlight in a JSON configuration file by clicking on the respective icon:

For more information how to configure the Spotlight UI please refer to the [documentation](https://spotlight.renumics.com).
## Dataset Structure
### Data Instances
For each instance, there is an `Audio` for the audio, a string for the path, an integer for the section, a string for the d1p (parameter), a string for the d1v (value),
a `ClassLabel` for the label and a `ClassLabel` for the class.
```python
{'audio': {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': 'train/fan_section_01_source_train_normal_0592_f-n_A.wav',
'sampling_rate': 16000
}
'path': 'train/fan_section_01_source_train_normal_0592_f-n_A.wav'
'section': 1
'd1p': 'f-n'
'd1v': 'A'
'd2p': 'nan'
'd2v': 'nan'
'd3p': 'nan'
'd3v': 'nan'
'domain': 0 (source)
'label': 0 (normal)
'class': 1 (fan)
'dev_train_lof_anomaly': 0
'dev_train_lof_anomaly_score': 1.241023
'add_train_lof_anomaly': 1
'add_train_lof_anomaly_score': 1.806289
'ast-finetuned-audioset-10-10-0.4593-embeddings': [0.8152204155921936,
1.5862374305725098, ...,
1.7154160737991333]
}
```
The length of each audio file is 10 seconds.
### Data Fields
- `audio`: a `datasets.Audio`
- `path`: a string representing the path of the audio file inside the _tar.gz_ archive.
- `section`: an integer representing the section, see [Definition](#definition)
- `d*p`: a string representing the name of the d*-parameter
- `d*v`: a string representing the value of the corresponding d*-parameter
- `domain`: an integer whose value may be either _0_, indicating that the audio sample is from the _source_ domain, or _1_, indicating that it is from the _target_ domain.
- `class`: an integer class label.
- `label`: an integer whose value may be either _0_, indicating that the audio sample is _normal_, or _1_, indicating that it contains an _anomaly_.
- `[X]_lof_anomaly`: an integer anomaly indicator. The anomaly prediction is computed with the [Local Outlier Factor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) algorithm based on the `[X]` dataset.
- `[X]_lof_anomaly_score`: a float anomaly score, also computed with the [Local Outlier Factor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) algorithm based on the `[X]` dataset.
- `embeddings_ast-finetuned-audioset-10-10-0.4593`: a `datasets.Sequence(Value("float32"), shape=(1, 768))` representing audio embeddings that are generated with an [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer#transformers.ASTFeatureExtractor).
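As a rough sketch of how such LOF-based enrichments can be computed — using synthetic 2-D points in place of the actual AST embeddings; the parameter choices here are illustrative, not the ones used for this dataset:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Tight cluster of "normal" points plus one planted anomaly far away.
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
X = np.vstack([normal, [[5.0, 5.0]]])

lof = LocalOutlierFactor(n_neighbors=10)
pred = lof.fit_predict(X)               # 1 = inlier, -1 = outlier
scores = -lof.negative_outlier_factor_  # larger = more anomalous

print(pred[-1])                         # → -1 (the planted point is flagged)
```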
### Data Splits
The development dataset has 2 splits: _train_ and _test_.
| Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples |
| ------------- |------------------------------|---------------------------------------|
| Train | 7000 | 6930 / 70 |
| Test | 1400 | 700 / 700 |
The additional training dataset has 1 split: _train_.
| Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples |
| ------------- |------------------------------|---------------------------------------|
| Train | 7000 | 6930 / 70 |
The evaluation dataset has 1 split: _test_.
| Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples |
|---------------|------------------------------|---------------------------------------|
| Test | 1400 | ? |
## Dataset Creation
The following information is copied from the original [dataset upload on zenodo.org](https://zenodo.org/record/7690148#.ZAXsSdLMLmE)
### Curation Rationale
This dataset is the "development dataset" for the [DCASE 2023 Challenge Task 2 "First-Shot Unsupervised Anomalous Sound Detection for Machine Condition Monitoring"](https://dcase.community/challenge2023/task-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring).
The data consists of the normal/anomalous operating sounds of seven types of real/toy machines. Each recording is a single-channel 10-second audio that includes both a machine's operating sound and environmental noise. The following seven types of real/toy machines are used in this task:
- ToyCar
- ToyTrain
- Fan
- Gearbox
- Bearing
- Slide rail
- Valve
The "additional training data" and "evaluation data" datasets contain the following classes:
- bandsaw
- grinder
- shaker
- ToyDrone
- ToyNscale
- ToyTank
- Vacuum
### Source Data
#### Definition
We first define key terms in this task: "machine type," "section," "source domain," "target domain," and "attributes".
- "Machine type" indicates the type of machine, which in the development dataset is one of seven: fan, gearbox, bearing, slide rail, valve, ToyCar, and ToyTrain.
- A section is defined as a subset of the dataset for calculating performance metrics.
- The source domain is the domain under which most of the training data and some of the test data were recorded, and the target domain is a different set of domains under which some of the training data and some of the test data were recorded. There are differences between the source and target domains in terms of operating speed, machine load, viscosity, heating temperature, type of environmental noise, signal-to-noise ratio, etc.
- Attributes are parameters that define states of machines or types of noise.
#### Description
This dataset consists of seven machine types. For each machine type, one section is provided, and the section is a complete set of training and test data. For each section, this dataset provides (i) 990 clips of normal sounds in the source domain for training, (ii) ten clips of normal sounds in the target domain for training, and (iii) 100 clips each of normal and anomalous sounds for the test. The source/target domain of each sample is provided. Additionally, the attributes of each sample in the training and test data are provided in the file names and attribute csv files.
#### Recording procedure
Normal/anomalous operating sounds of machines and their related equipment were recorded. Anomalous sounds were collected by deliberately damaging the target machines. To simplify the task, we use only the first channel of multi-channel recordings; all recordings are regarded as single-channel recordings from a fixed microphone. We mixed each target machine sound with environmental noise, and only the noisy recordings are provided as training/test data. The environmental noise samples were recorded in several real factory environments. We will publish papers on the dataset to explain the details of the recording procedure by the submission deadline.
### Supported Tasks and Leaderboards
Anomalous sound detection (ASD) is the task of identifying whether the sound emitted from a target machine is normal or anomalous. Automatic detection of mechanical failure is an essential technology in the fourth industrial revolution, which involves artificial-intelligence-based factory automation. Prompt detection of machine anomalies by observing sounds is useful for monitoring the condition of machines.
This task is the follow-up from DCASE 2020 Task 2 to DCASE 2022 Task 2. The task this year is to develop an ASD system that meets the following four requirements.
**1. Train a model using only normal sound (unsupervised learning scenario)**
Because anomalies rarely occur and are highly diverse in real-world factories, it can be difficult to collect exhaustive patterns of anomalous sounds. Therefore, the system must detect unknown types of anomalous sounds that are not provided in the training data. This is the same requirement as in the previous tasks.
**2. Detect anomalies regardless of domain shifts (domain generalization task)**
In real-world cases, the operational states of a machine or the environmental noise can change to cause domain shifts. Domain-generalization techniques can be useful for handling domain shifts that occur frequently or are hard-to-notice. In this task, the system is required to use domain-generalization techniques for handling these domain shifts. This requirement is the same as in DCASE 2022 Task 2.
**3. Train a model for a completely new machine type**
For a completely new machine type, hyperparameters of the trained model cannot be tuned. Therefore, the system should have the ability to train models without additional hyperparameter tuning.
**4. Train a model using only one machine from its machine type**
While sounds from multiple machines of the same machine type can be used to enhance detection performance, it is often the case that sound data from only one machine are available for a machine type. In such a case, the system should be able to train models using only one machine from a machine type.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Baseline system
The baseline system is available in the GitHub repository [dcase2023_task2_baseline_ae](https://github.com/nttcslab/dcase2023_task2_baseline_ae). The baseline systems provide a simple entry-level approach that gives a reasonable performance on the dataset of Task 2. They are good starting points, especially for entry-level researchers who want to get familiar with the anomalous-sound-detection task.
### Dataset Curators
[//]: # (todo)
[More Information Needed]
### Licensing Information - Condition of use
This is a feature/embeddings-enriched version of the "DCASE 2023 Challenge Task 2 Development Dataset".
The [original dataset](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring#audio-datasets) was created jointly by **Hitachi, Ltd.** and **NTT Corporation** and is available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
### Citation Information (original)
If you use this dataset, please cite all of the following papers. We will publish a paper on DCASE 2023 Task 2, so please make sure to cite that paper, too.
- Kota Dohi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, Yuki Nikaido, and Yohei Kawaguchi. MIMII DG: sound dataset for malfunctioning industrial machine investigation and inspection for domain generalization task. In arXiv e-prints: 2205.13879, 2022. [[URL](https://arxiv.org/abs/2205.13879)]
- Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Masahiro Yasuda, and Shoichiro Saito. ToyADMOS2: another dataset of miniature-machine operating sounds for anomalous sound detection under domain shift conditions. In Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 1–5. Barcelona, Spain, November 2021. [[URL](https://dcase.community/documents/workshop2021/proceedings/DCASE2021Workshop_Harada_6.pdf)]
- Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, and Masahiro Yasuda. First-shot anomaly detection for machine condition monitoring: a domain generalization baseline. In arXiv e-prints: 2303.00455, 2023. [[URL](https://arxiv.org/abs/2303.00455.pdf)]
```
@dataset{kota_dohi_2023_7882613,
author = {Kota Dohi and
Keisuke Imoto and
Noboru Harada and
Daisuke Niizumi and
Yuma Koizumi and
Tomoya Nishida and
Harsh Purohit and
Takashi Endo and
Yohei Kawaguchi},
title = {DCASE 2023 Challenge Task 2 Development Dataset},
month = mar,
year = 2023,
publisher = {Zenodo},
version = {3.0},
doi = {10.5281/zenodo.7882613},
url = {https://doi.org/10.5281/zenodo.7882613}
}
``` |
false | # Wine Origin
The [Wine Origin dataset](https://archive-beta.ics.uci.edu/dataset/109/wine) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| wine_origin | Multiclass classification.| |
| wine_origin_0 | Binary classification. | Is the instance of class 0? |
| wine_origin_1 | Binary classification. | Is the instance of class 1? |
| wine_origin_2 | Binary classification. | Is the instance of class 2? | |
false | # Yeast
The [Yeast dataset](https://archive-beta.ics.uci.edu/dataset/110/yeast) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/yeast")["train"]
```
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| yeast | Multiclass classification.| |
| yeast_0 | Binary classification. | Is the instance of class 0? |
| yeast_1 | Binary classification. | Is the instance of class 1? |
| yeast_2 | Binary classification. | Is the instance of class 2? |
| yeast_3 | Binary classification. | Is the instance of class 3? |
| yeast_4 | Binary classification. | Is the instance of class 4? |
| yeast_5 | Binary classification. | Is the instance of class 5? |
| yeast_6 | Binary classification. | Is the instance of class 6? |
| yeast_7 | Binary classification. | Is the instance of class 7? |
| yeast_8 | Binary classification. | Is the instance of class 8? |
| yeast_9 | Binary classification. | Is the instance of class 9? | |
false | # Dataset Card for "thai_wikipedia_clean_20230101"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Thai Wikipedia database dump converted to plain text for NLP work.
This dataset was dumped on 1 January 2023 from [Thai Wikipedia](https://th.wikipedia.org).
- GitHub: [PyThaiNLP / ThaiWiki-clean](https://github.com/PyThaiNLP/ThaiWiki-clean)
- Notebook for upload to HF: [https://github.com/PyThaiNLP/ThaiWiki-clean/blob/main/thai_wikipedia_clean_20230101_hf.ipynb](https://github.com/PyThaiNLP/ThaiWiki-clean/blob/main/thai_wikipedia_clean_20230101_hf.ipynb) |
false |
# Dataset Description
- **Repository:** https://github.com/X-LANCE/medical-dataset
- **Paper:** https://arxiv.org/abs/2305.15891
# Dataset Summary
CSS is a large-scale cross-schema Chinese text-to-SQL dataset.
# Dataset Splits
### Example-based Split
* **train**: 3472 question/SQL pairs
* **dev**: 434 question/SQL pairs
* **test**: 434 question/SQL pairs
### Template-based Split
* **train**: 3470 question/SQL pairs
* **dev**: 430 question/SQL pairs
* **test**: 440 question/SQL pairs
### Schema-based Split
* **train**: 18550 question/SQL pairs
* **dev**: 8150 question/SQL pairs
* **test**: 6920 question/SQL pairs
# Citation Information
```
@misc{zhang2023css,
  title={CSS: A Large-scale Cross-schema Chinese Text-to-SQL Medical Dataset},
  author={Hanchong Zhang and Jieyu Li and Lu Chen and Ruisheng Cao and Yunyan Zhang and Yu Huang and Yefeng Zheng and Kai Yu},
  year={2023},
  eprint={2305.15891},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
|
false | |
true |
# Dataset Card for Dataset Name
Derived from eastwind/semeval-2016-absa-reviews-arabic using Helsinki-NLP/opus-mt-tc-big-ar-en |
false | |
false |
Chinese resume NER dataset; source: https://github.com/luopeixiang/named_entity_recognition.
The data format is shown below: each line consists of a character and its corresponding tag, the tag set is BIOES, and sentences are separated by a blank line.
```text
美 B-LOC
国 E-LOC
的 O
华 B-PER
莱 I-PER
士 E-PER
我 O
跟 O
他 O
谈 O
笑 O
风 O
生 O
```
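Tags in this scheme can be decoded back into entity spans — a minimal pure-Python sketch (the function name is ours, not part of the dataset):

```python
def decode_bioes(chars, tags):
    """Collect (entity_text, label) spans from parallel char/tag sequences."""
    entities, buf, label = [], [], None
    for ch, tag in zip(chars, tags):
        if tag == "O":
            buf, label = [], None
            continue
        prefix, _, kind = tag.partition("-")
        if prefix == "S":                          # single-char entity
            entities.append((ch, kind))
            buf, label = [], None
        elif prefix == "B":                        # begin a multi-char entity
            buf, label = [ch], kind
        elif prefix in ("I", "E") and label == kind:
            buf.append(ch)
            if prefix == "E":                      # end: flush the buffer
                entities.append(("".join(buf), kind))
                buf, label = [], None
    return entities

chars = list("美国的华莱士")
tags = ["B-LOC", "E-LOC", "O", "B-PER", "I-PER", "E-PER"]
print(decode_bioes(chars, tags))  # → [('美国', 'LOC'), ('华莱士', 'PER')]
```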
|
false |
# Dataset Card for common_voice
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
[Needs More Information]
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Vietnamese
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called `file_path`, its transcription, called `script`, and the decoded `audio`.
```
{
'file_path': 'audio/_1OsFqkFI38_34.304_39.424.wav',
'script': 'Ik vind dat een dubieuze procedure.',
'audio': {'path': 'audio/_1OsFqkFI38_34.304_39.424.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000}
}
```
### Data Fields
- `file_path`: the path to the audio file.
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `script`: the sentence the user was prompted to speak.
### Data Splits
The speech material has been subdivided into train, validation, and test portions. All three splits contain data that has been reviewed and deemed of high quality.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
### Contributions
Thanks to [@datlq](https://github.com/datlq98) for adding this dataset.
|
true |
# Dataset Card for TE-ca
## Dataset Description
- **Website:** https://zenodo.org/record/4761458
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
TE-ca is a dataset of textual entailment in Catalan, which contains 21,163 pairs of premises and hypotheses, annotated according to the inference relation between them (entailment, neutral or contradiction).
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
Three JSON files, one for each split.
### Example:
<pre>
{
"id": 3247,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "S'acorden unes recomanacions per les persones migrades a Marràqueix",
"label": "0"
},
{
"id": 2825,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "Les persones migrades seran acollides a Marràqueix",
"label": "1"
},
{
"id": 2431,
"premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
"hypothesis": "L'acord impulsat per l'ONU lluny de tancar-se",
"label": "2"
},
</pre>
### Data Fields
- premise: text
- hypothesis: text related to the premise
- label: relation between premise and hypothesis:
* 0: entailment
* 1: neutral
* 2: contradiction
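Note that labels are stored as strings. As a sketch, they can be mapped back to their names using the list above (the `LABELS` mapping and `label_name` key are illustrative helpers, not part of the dataset):

```python
# Mapping taken from the label list in this card.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

# One of the example instances from this card.
example = {
    "id": 2431,
    "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
    "hypothesis": "L'acord impulsat per l'ONU lluny de tancar-se",
    "label": "2",
}

# Labels are strings, so cast to int before the lookup.
label_name = LABELS[int(example["label"])]
print(label_name)  # contradiction
```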
### Data Splits
* dev.json: 2116 examples
* test.json: 2117 examples
* train.json: 16930 examples
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
Source sentences are extracted from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) and from [VilaWeb](https://www.vilaweb.cat) newswire.
#### Initial Data Collection and Normalization
12000 sentences from the BSC [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349), together with 6200 headers from the Catalan news site [VilaWeb](https://www.vilaweb.cat), were chosen randomly. We filtered them by different criteria, such as length and stand-alone intelligibility. For each selected text, we commissioned 3 hypotheses (one for each entailment category) to be written by a team of native annotators.
Some sentence pairs were excluded because of inconsistencies.
#### Who are the source language producers?
The Catalan Textual Corpus corpus consists of several corpora gathered from web crawling and public corpora. More information can be found [here](https://doi.org/10.5281/zenodo.4519349).
[VilaWeb](https://www.vilaweb.cat) is a Catalan newswire.
### Annotations
#### Annotation process
We commissioned 3 hypotheses (one for each entailment category) to be written by a team of annotators.
#### Who are the annotators?
Annotators are a team of native language collaborators from two independent companies.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans%28ca%7Cen%29) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4529183)
|
false |
# Itihāsa
Itihāsa is a Sanskrit-English translation corpus containing 93,000 Sanskrit shlokas and their English translations extracted from M. N. Dutt's seminal works on The Rāmāyana and The Mahābhārata. The paper which introduced this dataset can be found [here](https://aclanthology.org/2021.wat-1.22/).
This repository contains the randomized train, development, and test sets. The original extracted data can be found [here](https://github.com/rahular/itihasa/tree/gh-pages/res) in JSON format. If you just want to browse the data, you can go [here](http://rahular.com/itihasa/).
## Usage
```
>> from datasets import load_dataset
>> dataset = load_dataset("rahular/itihasa")
>> dataset
DatasetDict({
train: Dataset({
features: ['translation'],
num_rows: 75162
})
validation: Dataset({
features: ['translation'],
num_rows: 6149
})
test: Dataset({
features: ['translation'],
num_rows: 11722
})
})
>> dataset['train'][0]
{'translation': {'en': 'The ascetic Vālmīki asked Nārada, the best of sages and foremost of those conversant with words, ever engaged in austerities and Vedic studies.',
'sn': 'ॐ तपः स्वाध्यायनिरतं तपस्वी वाग्विदां वरम्। नारदं परिपप्रच्छ वाल्मीकिर्मुनिपुङ्गवम्॥'}}
```
## Citation
If you found this dataset to be useful, please consider citing the paper as follows:
```
@inproceedings{aralikatte-etal-2021-itihasa,
title = "Itihasa: A large-scale corpus for {S}anskrit to {E}nglish translation",
author = "Aralikatte, Rahul and
de Lhoneux, Miryam and
Kunchukuttan, Anoop and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wat-1.22",
pages = "191--197",
abstract = "This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.",
}
``` |
false |
# lang-uk's ner-uk dataset
A dataset for Ukrainian Named Entity Recognition.
The original dataset is located at https://github.com/lang-uk/ner-uk. All credit for creation of the dataset goes to the contributors of https://github.com/lang-uk/ner-uk.
# License
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Dataset" property="dct:title" rel="dct:type">"Корпус NER-анотацій українських текстів"</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="https://github.com/lang-uk" property="cc:attributionName" rel="cc:attributionURL">lang-uk</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/lang-uk/ner-uk" rel="dct:source">https://github.com/lang-uk/ner-uk</a>. |
false |
# Dataset Card for taiwanese_english_translation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://taigi.fhl.net/list.html
### Dataset Summary
[More Information Needed]
### Languages
Source Language: Taiwanese (Tailo romanization system)
Target Language: English
## Dataset Structure
The data is provided as a CSV file with two columns: `Tailo,English`.
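Assuming the two-column `Tailo,English` layout, the file can be read into translation pairs with the standard `csv` module (the sample rows below are invented for illustration and are not taken from the dataset):

```python
import csv
import io

# Illustrative rows only; the real file follows the same Tailo,English layout.
sample = "Tailo,English\nLí hó,Hello\nTo-siā,Thank you\n"

reader = csv.DictReader(io.StringIO(sample))
pairs = [(row["Tailo"], row["English"]) for row in reader]
print(pairs)  # [('Lí hó', 'Hello'), ('To-siā', 'Thank you')]
```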
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@atenglens](https://github.com/atenglens) for adding this dataset. |
false |
# Dataset Card for TruthfulQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/sylinrl/TruthfulQA](https://github.com/sylinrl/TruthfulQA)
- **Repository:** [https://github.com/sylinrl/TruthfulQA](https://github.com/sylinrl/TruthfulQA)
- **Paper:** [https://arxiv.org/abs/2109.07958](https://arxiv.org/abs/2109.07958)
### Dataset Summary
TruthfulQA: Measuring How Models Mimic Human Falsehoods
We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
### Supported Tasks and Leaderboards
See: [Tasks](https://github.com/sylinrl/TruthfulQA#tasks)
### Languages
English
## Dataset Structure
### Data Instances
The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics.
### Data Fields
1. **Type**: Adversarial v Non-Adversarial Questions
2. **Category**: Category of misleading question
3. **Question**: The question
4. **Best Answer**: The best correct answer
5. **Correct Answers**: A set of correct answers. Delimited by `;`.
6. **Incorrect Answers**: A set of incorrect answers. Delimited by `;`.
7. **Source**: A source that supports the correct answers.
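Since the answer fields are `;`-delimited strings, a small helper can split them into lists. This is a sketch (the helper name and the sample field content are invented for illustration):

```python
def split_answers(field: str) -> list[str]:
    """Split a ';'-delimited answer field into a clean list of answers."""
    return [a.strip() for a in field.split(";") if a.strip()]

# Invented example content in the card's ';'-delimited format.
correct = split_answers("No; It is a myth; There is no evidence for it")
print(correct)  # ['No', 'It is a myth', 'There is no evidence for it']
```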
### Data Splits
Due to constraints of the Hugging Face Hub, the entire dataset is loaded into a single "train" split.
### Contributions
Thanks to [@sylinrl](https://github.com/sylinrl) for adding this dataset. |
false |
# Dataset Card for GovReport
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Versions](#versions)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://gov-report-data.github.io](https://gov-report-data.github.io)
- **Repository:** [https://github.com/luyang-huang96/LongDocSum](https://github.com/luyang-huang96/LongDocSum)
- **Paper:** [https://aclanthology.org/2021.naacl-main.112/](https://aclanthology.org/2021.naacl-main.112/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The GovReport dataset consists of reports and associated summaries written by government research agencies, including the Congressional Research Service and the U.S. Government Accountability Office.
Compared with other long-document summarization datasets, GovReport has longer documents and summaries, and requires reading more context to cover the salient information that must be summarized.
### Versions
- `1.0.1` (default): remove extra whitespace.
- `1.0.0`: the dataset used in the original paper.
To use different versions, set the `revision` argument of the `load_dataset` function.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
Three configs are available:
- **plain_text** (default): the text-to-text summarization setting as used in the original paper.
- **plain_text_with_recommendations**: the text-to-text summarization setting, with "What GAO recommends" included in the summary.
- **structure**: data with the section structure.
To use different configs, set the `name` argument of the `load_dataset` function.
### Data Instances
#### plain_text & plain_text_with_recommendations
An example looks as follows.
```
{
"id": "GAO_123456",
"document": "This is a test document.",
"summary": "This is a test summary"
}
```
#### structure
An example looks as follows.
```
{
"id": "GAO_123456",
"document_sections": {
"title": ["test document section 1 title", "test document section 1.1 title"],
"paragraphs": ["test document\nsection 1 paragraphs", "test document\nsection 1.1 paragraphs"],
"depth": [1, 2]
},
"summary_sections": {
"title": ["test summary section 1 title", "test summary section 2 title"],
"paragraphs": ["test summary\nsection 1 paragraphs", "test summary\nsection 2 paragraphs"]
}
}
```
### Data Fields
#### plain_text & plain_text_with_recommendations
- `id`: a `string` feature.
- `document`: a `string` feature.
- `summary`: a `string` feature.
#### structure
- `id`: a `string` feature.
- `document_sections`: a dictionary feature containing lists of (each element corresponds to a section):
- `title`: a `string` feature.
- `paragraphs`: a `string` feature, with `\n` separating different paragraphs.
- `depth`: an `int32` feature.
- `summary_sections`: a dictionary feature containing lists of (each element corresponds to a section):
- `title`: a `string` feature.
- `paragraphs`: a `string` feature, with `\n` separating different paragraphs.
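A sketch of how the `structure` config's parallel lists can be flattened back into a single document, using `depth` to pick a heading level (the instance is the toy example from this card; rendering depth as Markdown `#` markers is an assumption of this sketch, not part of the dataset):

```python
# Toy instance modeled on the "structure" example above.
example = {
    "id": "GAO_123456",
    "document_sections": {
        "title": ["test document section 1 title", "test document section 1.1 title"],
        "paragraphs": ["test document\nsection 1 paragraphs",
                       "test document\nsection 1.1 paragraphs"],
        "depth": [1, 2],
    },
}

sections = example["document_sections"]
parts = []
for title, paragraphs, depth in zip(sections["title"], sections["paragraphs"], sections["depth"]):
    parts.append("#" * depth + " " + title)  # depth -> heading level
    parts.extend(paragraphs.split("\n"))     # "\n" separates paragraphs within a section
flat = "\n".join(parts)
print(flat)
```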
### Data Splits
- train: 17519
- valid: 974
- test: 973
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Editors of the Congressional Research Service and U.S. Government Accountability Office.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{huang-etal-2021-efficient,
title = "Efficient Attentions for Long Document Summarization",
author = "Huang, Luyang and
Cao, Shuyang and
Parulian, Nikolaus and
Ji, Heng and
Wang, Lu",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.112",
doi = "10.18653/v1/2021.naacl-main.112",
pages = "1419--1436",
abstract = "The quadratic computational and memory complexities of large Transformers have limited their scalability for long document summarization. In this paper, we propose Hepos, a novel efficient encoder-decoder attention with head-wise positional strides to effectively pinpoint salient information from the source. We further conduct a systematic study of existing efficient self-attentions. Combined with Hepos, we are able to process ten times more tokens than existing models that use full attentions. For evaluation, we present a new dataset, GovReport, with significantly longer documents and summaries. Results show that our models produce significantly higher ROUGE scores than competitive comparisons, including new state-of-the-art results on PubMed. Human evaluation also shows that our models generate more informative summaries with fewer unfaithful errors.",
}
```
|
false |
# Dataset Card for GovReport-QS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://gov-report-data.github.io](https://gov-report-data.github.io)
- **Repository:** [https://github.com/ShuyangCao/hibrids_summ](https://github.com/ShuyangCao/hibrids_summ)
- **Paper:** [https://aclanthology.org/2022.acl-long.58/](https://aclanthology.org/2022.acl-long.58/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Based on the GovReport dataset, GovReport-QS additionally includes annotated question-summary hierarchies for government reports. This hierarchy proactively highlights the document structure, to further promote content engagement and comprehension.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
Two configs are available:
- **paragraph** (default): paragraph-level annotated data
- **document**: aggregated paragraph-level annotated data for the same document
To use different configs, set the `name` argument of the `load_dataset` function.
### Data Instances
#### paragraph
An example looks as follows.
```
{
"doc_id": "GAO_123456",
"summary_paragraph_index": 2,
"document_sections": {
"title": ["test document section 1 title", "test document section 1.1 title"],
"paragraphs": ["test document\nsection 1 paragraphs", "test document\nsection 1.1 paragraphs"],
"depth": [1, 2]
},
"question_summary_pairs": {
"question": ["What is the test question 1?", "What is the test question 1.1?"],
"summary": ["This is the test answer 1.", "This is the test answer 1.1"],
"parent_pair_index": [-1, 0]
}
}
```
#### document
An example looks as follows.
```
{
"doc_id": "GAO_123456",
"document_sections": {
"title": ["test document section 1 title", "test document section 1.1 title"],
"paragraphs": ["test document\nsection 1 paragraphs", "test document\nsection 1.1 paragraphs"],
"depth": [1, 2],
"alignment": ["h0_title", "h0_full"]
},
"question_summary_pairs": {
"question": ["What is the test question 1?", "What is the test question 1.1?"],
"summary": ["This is the test answer 1.", "This is the test answer 1.1"],
"parent_pair_index": [-1, 0],
"summary_paragraph_index": [2, 2]
}
}
```
### Data Fields
#### paragraph
**Note that document_sections in this config are the sections aligned with the annotated summary paragraph.**
- `doc_id`: a `string` feature.
- `summary_paragraph_index`: an `int32` feature.
- `document_sections`: a dictionary feature containing lists of (each element corresponds to a section):
- `title`: a `string` feature.
- `paragraphs`: a `string` feature, with `\n` separating different paragraphs.
- `depth`: an `int32` feature.
- `question_summary_pairs`: a dictionary feature containing lists of (each element corresponds to a question-summary pair):
- `question`: a `string` feature.
- `summary`: a `string` feature.
- `parent_pair_index`: an `int32` feature indicating which question-summary pair is the parent of the current pair. `-1` indicates that the current pair has no parent.
#### document
**Note that document_sections in this config are all the sections in the document.**
- `doc_id`: a `string` feature.
- `document_sections`: a dictionary feature containing lists of (each element corresponds to a section):
- `title`: a `string` feature.
- `paragraphs`: a `string` feature, with `\n` separating different paragraphs.
- `depth`: an `int32` feature.
- `alignment`: a `string` feature. Whether the `full` section or the `title` of the section should be included when aligned with each annotated hierarchy. For example, `h0_full` indicates that the full section should be included for the hierarchy indexed `0`.
- `question_summary_pairs`: a dictionary feature containing lists of:
- `question`: a `string` feature.
- `summary`: a `string` feature.
- `parent_pair_index`: an `int32` feature indicating which question-summary pair is the parent of the current pair. `-1` indicates that the current pair has no parent. Note that the indices start from `0` for pairs with the same `summary_paragraph_index`.
- `summary_paragraph_index`: an `int32` feature indicating which summary paragraph the question-summary pair is annotated for.
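A sketch of recovering each pair's depth in the question-summary hierarchy from `parent_pair_index` (the pairs are the toy example from this card; the `pair_depth` helper is illustrative):

```python
# Toy question_summary_pairs modeled on the examples above.
pairs = {
    "question": ["What is the test question 1?", "What is the test question 1.1?"],
    "summary": ["This is the test answer 1.", "This is the test answer 1.1"],
    "parent_pair_index": [-1, 0],
}

def pair_depth(parents, i):
    """Depth of pair i: 0 for a root (parent -1), else one more than its parent."""
    depth = 0
    while parents[i] != -1:  # walk up the parent chain
        i = parents[i]
        depth += 1
    return depth

depths = [pair_depth(pairs["parent_pair_index"], i) for i in range(len(pairs["question"]))]
print(depths)  # [0, 1]
```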
### Data Splits
#### paragraph
- train: 17519
- valid: 974
- test: 973
#### document
- train: 1371
- valid: 171
- test: 172
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Editors of the Congressional Research Service and U.S. Government Accountability Office.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{cao-wang-2022-hibrids,
title = "{HIBRIDS}: Attention with Hierarchical Biases for Structure-aware Long Document Summarization",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.58",
pages = "786--807",
abstract = "Document structure is critical for efficient information consumption. However, it is challenging to encode it efficiently into the modern Transformer architecture. In this work, we present HIBRIDS, which injects Hierarchical Biases foR Incorporating Document Structure into attention score calculation. We further present a new task, hierarchical question-summary generation, for summarizing salient content in the source document into a hierarchy of questions and summaries, where each follow-up question inquires about the content of its parent question-summary pair. We also annotate a new dataset with 6,153 question-summary hierarchies labeled on government reports. Experiment results show that our model produces better question-summary hierarchies than comparisons on both hierarchy quality and content coverage, a finding also echoed by human judges. Additionally, our model improves the generation of long-form summaries from long government reports and Wikipedia articles, as measured by ROUGE scores.",
}
```
|
true | ### Dataset Summary
The dataset contains user reviews about restaurants.
In total it contains 47,139 reviews. Each review is tagged with the <em>general</em> sentiment and with sentiments on three aspects: <em>food</em>, <em>interior</em>, and <em>service</em>.
### Data Fields
Each sample contains the following fields:
- **review_id**;
- **general**;
- **food**;
- **interior**;
- **service**;
- **text**: the review text.
### Python
```python
import pandas as pd
df = pd.read_json('restaurants_reviews.jsonl', lines=True)
df.sample(5)
``` |
false |
# Dataset Card for "coco_captions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://cocodataset.org/#home](https://cocodataset.org/#home)
- **Repository:** [https://github.com/cocodataset/cocodataset.github.io](https://github.com/cocodataset/cocodataset.github.io)
- **Paper:** [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
- **Point of Contact:** [info@cocodataset.org](mailto:info@cocodataset.org)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:** 6.32 MB
### Dataset Summary
COCO is a large-scale object detection, segmentation, and captioning dataset. This repo contains five captions per image; useful for sentence similarity tasks.
Disclaimer: The team releasing COCO did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains a quintet of similar sentences and is formatted as a dictionary with a single key "set" whose value is the list of sentences:
```
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
...
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/coco_captions")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 82783
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
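Since each example bundles five equivalent captions, a common preprocessing step for similarity training is to expand each set into caption pairs. A minimal sketch (the helper name and the sample captions are illustrative, not taken from the dataset):

```python
from itertools import combinations

def set_to_pairs(example):
    """Expand one 5-caption set into all 10 unordered caption pairs."""
    return list(combinations(example["set"], 2))

# Made-up record in the {"set": [...]} format shown above:
example = {"set": ["a dog runs", "a dog is running", "dog sprinting",
                   "a running dog", "the dog moves fast"]}
pairs = set_to_pairs(example)
print(len(pairs))  # 10 pairs: C(5, 2)
```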
### Data Instances
[More Information Needed](https://cocodataset.org/#format-data)
### Data Splits
[More Information Needed](https://cocodataset.org/#format-data)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://cocodataset.org/#home)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://cocodataset.org/#home)
#### Who are the source language producers?
[More Information Needed](https://cocodataset.org/#home)
### Annotations
#### Annotation process
[More Information Needed](https://cocodataset.org/#home)
#### Who are the annotators?
[More Information Needed](https://cocodataset.org/#home)
### Personal and Sensitive Information
[More Information Needed](https://cocodataset.org/#home)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://cocodataset.org/#home)
### Discussion of Biases
[More Information Needed](https://cocodataset.org/#home)
### Other Known Limitations
[More Information Needed](https://cocodataset.org/#home)
## Additional Information
### Dataset Curators
[More Information Needed](https://cocodataset.org/#home)
### Licensing Information
The annotations in this dataset along with this website belong to the COCO Consortium
and are licensed under a [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode).
### Citation Information
[More Information Needed](https://cocodataset.org/#home)
### Contributions
Thanks to:
- Tsung-Yi Lin - Google Brain
- Genevieve Patterson - MSR, Trash TV
- Matteo R. Ronchi - Caltech
- Yin Cui - Google
- Michael Maire - TTI-Chicago
- Serge Belongie - Cornell Tech
- Lubomir Bourdev - WaveOne, Inc.
- Ross Girshick - FAIR
- James Hays - Georgia Tech
- Pietro Perona - Caltech
- Deva Ramanan - CMU
- Larry Zitnick - FAIR
- Piotr Dollár - FAIR
for adding this dataset.
|
false |
# Dataset Card for "WikiAnswers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/afader/oqa#wikianswers-corpus](https://github.com/afader/oqa#wikianswers-corpus)
- **Repository:** [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
- **Paper:** [Open Question Answering over Curated and Extracted Knowledge Bases](https://doi.org/10.1145/2623330.2623677)
- **Point of Contact:** [Anthony Fader](https://dl.acm.org/profile/81324489111), [Luke Zettlemoyer](https://dl.acm.org/profile/81100527621), [Oren Etzioni](https://dl.acm.org/profile/99658633129)
### Dataset Summary
The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases.
Each cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with a single key "set" whose value is the list of sentences.
```
{"set": [sentence_1, sentence_2, ..., sentence_25]}
{"set": [sentence_1, sentence_2, ..., sentence_25]}
...
{"set": [sentence_1, sentence_2, ..., sentence_25]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/WikiAnswers")
```
The dataset is loaded as a `DatasetDict` and has the format for `N` examples:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: N
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
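Because a 25-sentence cluster expands to C(25, 2) = 300 paraphrase pairs, materializing every pair is usually impractical; sampling a few pairs per cluster is a common alternative. A minimal sketch (the function and the cap `k` are illustrative assumptions, not part of the dataset):

```python
import random

def sample_pairs(cluster, k=5, seed=0):
    """Sample up to k distinct paraphrase pairs from one question
    cluster instead of materializing all 300 combinations."""
    rng = random.Random(seed)
    sentences = cluster["set"]
    pairs = set()
    while len(pairs) < k:
        a, b = rng.sample(sentences, 2)  # two distinct sentences
        pairs.add((a, b) if a < b else (b, a))  # canonical order
    return list(pairs)

cluster = {"set": [f"question variant {i}" for i in range(25)]}
print(len(sample_pairs(cluster)))  # 5
```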
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the source language producers?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the annotators?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Personal and Sensitive Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Discussion of Biases
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Other Known Limitations
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Licensing Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Citation Information
```
@inproceedings{Fader14,
author = {Anthony Fader and Luke Zettlemoyer and Oren Etzioni},
title = {{Open Question Answering Over Curated and Extracted
Knowledge Bases}},
booktitle = {KDD},
year = {2014}
}
```
### Contributions
|
false |
# Introduction
The recognition and classification of proper nouns and names in plain text is of key importance in Natural Language Processing (NLP) as it has a beneficial effect on the performance of various types of applications, including Information Extraction, Machine Translation, Syntactic Parsing/Chunking, etc.
## Corpus of Business Newswire Texts (business)
The Named Entity Corpus for Hungarian is a subcorpus of the Szeged Treebank, which contains full syntactic annotations done manually by linguist experts. A significant part of these texts has been annotated with Named Entity class labels in line with the annotation standards used on the CoNLL-2003 shared task.
Statistical data on Named Entities occurring in the corpus:
```
| tokens | phrases
------ | ------ | -------
non NE | 200067 |
PER | 1921 | 982
ORG | 20433 | 10533
LOC | 1501 | 1294
MISC | 2041 | 1662
```
### Reference
> György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik: Highly accurate Named Entity corpus for Hungarian. International Conference on Language Resources and Evaluation 2006, Genova (Italy)
## Criminal NE corpus (criminal)
The Hungarian National Corpus and its Heti Világgazdaság (HVG) subcorpus provided the basis for corpus text selection: articles related to the topic of financially liable offences were selected and annotated for the categories person, organization, location and miscellaneous.
There are two annotated versions of the corpus. When preparing the tag-for-meaning annotation, our linguists took into consideration the context in which the Named Entity under investigation occurred, thus, it was not the primary sense of the Named Entity that determined the tag (e.g. Manchester=LOC) but its contextual reference (e.g. Manchester won the Premier League=ORG). As for tag-for-tag annotation, these cases were not differentiated: tags were always given on the basis of the primary sense.
Statistical data on Named Entities occurring in the corpus:
```
| tag-for-meaning | tag-for-tag
------ | --------------- | -----------
non NE | 200067 |
PER | 8101 | 8121
ORG | 8782 | 9480
LOC | 5049 | 5391
MISC | 1917 | 854
```
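The tokens-vs-phrases distinction in the tables above follows from BIO tagging: a phrase spans one `B-` tag plus any following `I-` tags, so counting `B-` tags counts phrases. A minimal sketch, assuming the integer `ner_tags` have already been mapped to their string names:

```python
def count_phrases(ner_tags):
    """Count named-entity phrases per type: in BIO tagging each
    phrase starts with exactly one B- tag."""
    counts = {}
    for tag in ner_tags:
        if tag.startswith("B-"):
            etype = tag[2:]
            counts[etype] = counts.get(etype, 0) + 1
    return counts

tags = ["O", "B-ORG", "I-ORG", "O", "B-PER", "B-LOC", "I-LOC", "O"]
print(count_phrases(tags))  # {'ORG': 1, 'PER': 1, 'LOC': 1}
```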
## Metadata
```yaml
dataset_info:
- config_name: business
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          0: O
          1: B-PER
          2: I-PER
          3: B-ORG
          4: I-ORG
          5: B-LOC
          6: I-LOC
          7: B-MISC
          8: I-MISC
  - name: document_id
    dtype: string
  - name: sentence_id
    dtype: string
  splits:
  - name: original
    num_bytes: 4452207
    num_examples: 9573
  - name: test
    num_bytes: 856798
    num_examples: 1915
  - name: train
    num_bytes: 3171931
    num_examples: 6701
  - name: validation
    num_bytes: 423478
    num_examples: 957
  download_size: 0
  dataset_size: 8904414
- config_name: criminal
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          0: O
          1: B-PER
          2: I-PER
          3: B-ORG
          4: I-ORG
          5: B-LOC
          6: I-LOC
          7: B-MISC
          8: I-MISC
  - name: document_id
    dtype: string
  - name: sentence_id
    dtype: string
  splits:
  - name: original
    num_bytes: 2807970
    num_examples: 5375
  - name: test
    num_bytes: 520959
    num_examples: 1089
  - name: train
    num_bytes: 1989662
    num_examples: 3760
  - name: validation
    num_bytes: 297349
    num_examples: 526
  download_size: 0
  dataset_size: 5615940
```
|
false |
# Dataset Card for germanDPR-beir
## Dataset Summary
This dataset can be used for [BEIR](https://arxiv.org/abs/2104.08663) evaluation based on [deepset/germanDPR](https://huggingface.co/datasets/deepset/germandpr).
It already has been used to evaluate a newly trained [bi-encoder model](https://huggingface.co/PM-AI/bi-encoder_msmarco_bert-base_german).
The benchmark framework requires a particular dataset structure by default which has been created locally and uploaded here.
Acknowledgement: The dataset was initially created as "[germanDPR](https://www.deepset.ai/germanquad)" by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at deepset.ai.
## Dataset Creation
First, the original dataset [deepset/germanDPR](https://huggingface.co/datasets/deepset/germandpr) was converted into three files for BEIR compatibility:
- The first file is `queries.jsonl` and contains an ID and a question in each line.
- The second file, `corpus.jsonl`, contains in each line an ID, a title, a text and some metadata.
- In the `qrels` folder is the third file. It connects every question from `queries.jsonl` (via `q_id`) with a relevant text/answer from `corpus.jsonl` (via `c_id`).
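As a rough sketch of the qrels format, BEIR reads a tab-separated file mapping query IDs to relevant corpus IDs (the IDs below are made up, not taken from the dataset):

```python
import csv
import io

# Illustrative qrels content: one header row, then query-id / corpus-id / score.
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tc7\t1\nq2\tc3\t1\n"

# Build the nested {query-id: {corpus-id: score}} mapping BEIR works with.
qrels = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(qrels)  # {'q1': {'c7': 1}, 'q2': {'c3': 1}}
```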
This process has been done for `train` and `test` split separately based on the original germanDPR dataset.
Approaching the dataset creation like this is necessary because queries AND corpus both differ between the train and test splits of deepset's germanDPR dataset,
and it might be confusing to change this specific split.
In conclusion, queries and corpus differ between the train and test split, not only the qrels data!
Note: If you want one big corpus use `datasets.concatenate_datasets()`.
In the original dataset, there is one passage containing the answer and three "wrong" passages for each question.
During the creation of this customized dataset, all four passages are added, but only if they are not already present (... meaning they have been deduplicated).
It should be noted, that BEIR is combining `title` + `text` in `corpus.jsonl` to a new string which may produce odd results:
The original germanDPR dataset does not always contain "classical" titles (i.e. short), but sometimes consists of whole sentences, which are also present in the "text" field.
This results in very long passages as well as duplications.
In addition, both title and text contain specially formatted content.
For example, the words used in titles are often connected with underscores:
> `Apple_Magic_Mouse`
And texts begin with special characters to distinguish headings and subheadings:
> `Wirtschaft_der_Vereinigten_Staaten\n\n== Verschuldung ==\nEin durchschnittlicher Haushalt (...)`
Line breaks are also frequently found, as you can see.
Of course, it depends on the application whether these things become a problem or not.
However, it was decided to release two variants of the original dataset:
- The `original` variant leaves the titles and texts as they are. There are no modifications.
- The `processed` variant removes the title completely and simplifies the texts by removing the special formatting.
The creation of both variants can be viewed in [create_dataset.py](https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/create_dataset.py).
In particular, the following parameters were used:
- `original`: `SPLIT=test/train, TEXT_PREPROCESSING=False, KEEP_TITLE=True`
- `processed`: `SPLIT=test/train, TEXT_PREPROCESSING=True, KEEP_TITLE=False`
One final thing to mention: The IDs for queries and the corpus should not match!!!
During the evaluation using BEIR, it was found that if these IDs match, the result for that entry is completely removed.
This means some of the results are missing.
A correct calculation of the overall result is no longer possible.
Have a look into [BEIR's evaluation.py](https://github.com/beir-cellar/beir/blob/c3334fd5b336dba03c5e3e605a82fcfb1bdf667d/beir/retrieval/evaluation.py#L49) for further understanding.
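A quick defensive check for this pitfall (the helper is illustrative, not part of BEIR):

```python
def assert_disjoint_ids(query_ids, corpus_ids):
    """BEIR silently drops results whose query ID equals a corpus ID,
    so fail fast if the two ID spaces overlap."""
    overlap = set(query_ids) & set(corpus_ids)
    if overlap:
        raise ValueError(f"{len(overlap)} IDs appear in both sets: {sorted(overlap)[:5]}")

assert_disjoint_ids(["q1", "q2"], ["c1", "c2"])  # passes silently
```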
## Dataset Usage
As earlier mentioned, this dataset is intended to be used with the BEIR benchmark framework.
The file and data structure required for BEIR can only be used to a limited extent with Huggingface Datasets or it is necessary to define multiple dataset repositories at once.
To make it easier, the [dl_dataset.py](https://huggingface.co/datasets/PM-AI/germandpr-beir/tree/main/dl_dataset.py) script is provided to download the dataset and to ensure the correct file and folder structure.
```python
# dl_dataset.py
import json
import os
import datasets
from beir.datasets.data_loader import GenericDataLoader
# ----------------------------------------
# This script downloads the BEIR compatible germanDPR dataset from "Huggingface Datasets" to your local machine.
# Please see dataset's description/readme to learn more about how the dataset was created.
# If you want to use deepset/germandpr without any changes, use TYPE "original"
# If you want to reproduce PM-AI/bi-encoder_msmarco_bert-base_german, use TYPE "processed"
# ----------------------------------------
TYPE = "processed" # or "original"
SPLIT = "train" # or "test"
DOWNLOAD_DIR = "germandpr-beir-dataset"
DOWNLOAD_DIR = os.path.join(DOWNLOAD_DIR, f'{TYPE}/{SPLIT}')
DOWNLOAD_QREL_DIR = os.path.join(DOWNLOAD_DIR, f'qrels/')
os.makedirs(DOWNLOAD_QREL_DIR, exist_ok=True)
# for BEIR compatibility we need queries, corpus and qrels all together
# ensure to always load these three based on the same type (all "processed" or all "original")
for subset_name in ["queries", "corpus", "qrels"]:
subset = datasets.load_dataset("PM-AI/germandpr-beir", f'{TYPE}-{subset_name}', split=SPLIT)
if subset_name == "qrels":
out_path = os.path.join(DOWNLOAD_QREL_DIR, f'{SPLIT}.tsv')
subset.to_csv(out_path, sep="\t", index=False)
else:
if subset_name == "queries":
_row_to_json = lambda row: json.dumps({"_id": row["_id"], "text": row["text"]}, ensure_ascii=False)
else:
_row_to_json = lambda row: json.dumps({"_id": row["_id"], "title": row["title"], "text": row["text"]}, ensure_ascii=False)
with open(os.path.join(DOWNLOAD_DIR, f'{subset_name}.jsonl'), "w", encoding="utf-8") as out_file:
for row in subset:
out_file.write(_row_to_json(row) + "\n")
# GenericDataLoader is part of BEIR. If everything is working correctly we can now load the dataset
corpus, queries, qrels = GenericDataLoader(data_folder=DOWNLOAD_DIR).load(SPLIT)
print(f'{SPLIT} corpus size: {len(corpus)}\n'
f'{SPLIT} queries size: {len(queries)}\n'
f'{SPLIT} qrels: {len(qrels)}\n')
print("--------------------------------------------------------------------------------------------------------------\n"
"Now you can use the downloaded files in BEIR framework\n"
"Example: https://github.com/beir-cellar/beir/blob/v1.0.1/examples/retrieval/evaluation/dense/evaluate_sbert.py\n"
"--------------------------------------------------------------------------------------------------------------")
```
Alternatively, the data sets can be downloaded directly:
- https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/data/original.tar.gz
- https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/data/processed.tar.gz
Now you can use the downloaded files in BEIR framework:
- For Example: [evaluate_sbert.py](https://github.com/beir-cellar/beir/blob/v1.0.1/examples/retrieval/evaluation/dense/evaluate_sbert.py)
- Just set variable `"dataset"` to `"germandpr-beir-dataset/processed/test"` or `"germandpr-beir-dataset/original/test"`.
- Same goes for `"train"`.
## Dataset Sizes
- Original **train** `corpus` size, `queries` size and `qrels` size: `24009`, `9275` and `9275`
- Original **test** `corpus` size, `queries` size and `qrels` size: `2876`, `1025` and `1025`
- Processed **train** `corpus` size, `queries` size and `qrels` size: `23993`, `9275` and `9275`
- Processed **test** `corpus` size, `queries` size and `qrels` size: `2875`, `1025` and `1025`
## Languages
This dataset only supports German (de, DE).
## Acknowledgment
The dataset was initially created as "[deepset/germanDPR](https://www.deepset.ai/germanquad)" by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at [deepset.ai](https://www.deepset.ai/).
This work is a collaboration between [Technical University of Applied Sciences Wildau (TH Wildau)](https://en.th-wildau.de/) and [sense.ai.tion GmbH](https://senseaition.com/).
You can contact us via:
* [Philipp Müller (M.Eng.)](https://www.linkedin.com/in/herrphilipps); Author
* [Prof. Dr. Janett Mohnke](mailto:icampus@th-wildau.de); TH Wildau
* [Dr. Matthias Boldt, Jörg Oehmichen](mailto:info@senseaition.com); sense.AI.tion GmbH
This work was funded by the European Regional Development Fund (EFRE) and the State of Brandenburg. Project/Vorhaben: "ProFIT: Natürlichsprachliche Dialogassistenten in der Pflege".
<div style="display:flex">
<div style="padding-left:20px;">
<a href="https://efre.brandenburg.de/efre/de/"><img src="https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/res/EFRE-Logo_rechts_oweb_en_rgb.jpeg" alt="Logo of European Regional Development Fund (EFRE)" width="200"/></a>
</div>
<div style="padding-left:20px;">
<a href="https://www.senseaition.com"><img src="https://senseaition.com/wp-content/uploads/thegem-logos/logo_c847aaa8f42141c4055d4a8665eb208d_3x.png" alt="Logo of senseaition GmbH" width="200"/></a>
</div>
<div style="padding-left:20px;">
<a href="https://www.th-wildau.de"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f6/TH_Wildau_Logo.png/640px-TH_Wildau_Logo.png" alt="Logo of TH Wildau" width="180"/></a>
</div>
</div> |
false |
# Dataset Card for DocBank
## Table of Contents
- [Dataset Card for DocBank](#dataset-card-for-docbank)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doc-analysis.github.io/docbank-page/index.html
- **Repository:** https://github.com/doc-analysis/DocBank
- **Paper:** https://arxiv.org/abs/2006.01038
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocBank is a new large-scale dataset that is constructed using a weak supervision approach. It enables models to integrate both the textual and layout information for downstream tasks. The current DocBank dataset totally includes 500K document pages, where 400K for training, 50K for validation and 50K for testing.
### Supported Tasks and Leaderboards
Document AI (text and layout)
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
```yaml
dataset_info:
  features:
  - name: image
    dtype: image
  - name: token
    dtype: string
  - name: bounding_box
    sequence:
      sequence: uint16
  - name: color
    sequence:
      sequence: uint8
  - name: font
    dtype: string
  - name: label
    dtype: string
```
### Data Splits
```yaml
dataset_info:
  splits:
  - name: train
    num_bytes: 80004043
    num_examples: 400000
  - name: validation
    num_bytes: 9995812
    num_examples: 50000
  - name: test
    num_bytes: 9995812
    num_examples: 50000
  download_size: 0
  dataset_size: 99995667
```
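Once loaded, the parallel `token` and `label` sequences of a page can be regrouped by layout region. A minimal sketch with made-up tokens (the label names are illustrative):

```python
def tokens_by_label(tokens, labels):
    """Group a page's tokens by their layout label."""
    grouped = {}
    for token, label in zip(tokens, labels):
        grouped.setdefault(label, []).append(token)
    return grouped

tokens = ["DocBank", "is", "a", "dataset", "1"]
labels = ["title", "paragraph", "paragraph", "paragraph", "page"]
print(tokens_by_label(tokens, labels))
# {'title': ['DocBank'], 'paragraph': ['is', 'a', 'dataset'], 'page': ['1']}
```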
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Apache 2.0 License
### Citation Information
```
@misc{li2020docbank,
    title={DocBank: A Benchmark Dataset for Document Layout Analysis},
    author={Minghao Li and Yiheng Xu and Lei Cui and Shaohan Huang and Furu Wei and Zhoujun Li and Ming Zhou},
    year={2020},
    eprint={2006.01038},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@doc-analysis](https://github.com/doc-analysis) for adding this dataset. |
false |
# Dataset Card for NusaX-MT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub](https://github.com/IndoNLP/nusax/tree/main/datasets/mt)
- **Paper:** [EACL 2022](https://arxiv.org/abs/2205.15960)
- **Point of Contact:** [GitHub](https://github.com/IndoNLP/nusax/tree/main/datasets/mt)
### Dataset Summary
NusaX is a high-quality multilingual parallel corpus that covers 12 languages: Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.
NusaX-MT is a parallel corpus for training and benchmarking machine translation models across 10 Indonesian local languages + Indonesian and English. The data is presented in csv format with 12 columns, one column for each language.
### Supported Tasks and Leaderboards
- Machine translation for Indonesian languages
### Languages
All possible pairs of the following:
- ace: Acehnese
- ban: Balinese
- bjn: Banjarese
- bug: Buginese
- eng: English
- ind: Indonesian
- jav: Javanese
- mad: Madurese
- min: Minangkabau
- nij: Ngaju
- sun: Sundanese
- bbc: Toba Batak
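Since each row carries all 12 languages, every row yields one ordered pair per translation direction. A minimal sketch, assuming the columns are keyed by the short codes listed above (the actual CSV may use full language names instead):

```python
from itertools import permutations

LANGS = ["ace", "ban", "bjn", "bug", "eng", "ind",
         "jav", "mad", "min", "nij", "sun", "bbc"]

def translation_pairs(row):
    """Turn one 12-column row into (src_lang, tgt_lang, src_text, tgt_text)
    tuples for every ordered language pair."""
    return [(s, t, row[s], row[t]) for s, t in permutations(LANGS, 2)]

row = {code: f"text in {code}" for code in LANGS}  # placeholder row
print(len(translation_pairs(row)))  # 132 directions from 12 languages
```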
## Dataset Creation
### Curation Rationale
There is a shortage of NLP research and resources for the Indonesian languages, despite the country having over 700 languages. With this in mind, we have created this dataset to support future research for the underrepresented languages in Indonesia.
### Source Data
#### Initial Data Collection and Normalization
NusaX-MT is a dataset for machine translation in Indonesian languages that has been expertly translated by native speakers.
#### Who are the source language producers?
The data was produced by humans (native speakers).
### Annotations
#### Annotation process
NusaX-MT is derived from SmSA, which is the biggest publicly available dataset for Indonesian sentiment analysis. It comprises comments and reviews from multiple online platforms. To ensure the quality of our dataset, we have filtered it by removing any abusive language and personally identifying information by manually reviewing all sentences. To ensure balance in the label distribution, we randomly picked 1,000 samples through stratified sampling and then translated them to the corresponding languages.
#### Who are the annotators?
Native speakers of both Indonesian and the corresponding languages.
Annotators were compensated based on the number of translated samples.
### Personal and Sensitive Information
Personal information is removed.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
NusaX is created from review text. These data sources may contain some bias.
### Other Known Limitations
No other known limitations
## Additional Information
### Licensing Information
CC-BY-SA 4.0.
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Please contact authors for any information on the dataset.
### Citation Information
```
@misc{winata2022nusax,
title={NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages},
author={Winata, Genta Indra and Aji, Alham Fikri and Cahyawijaya,
Samuel and Mahendra, Rahmad and Koto, Fajri and Romadhony,
Ade and Kurniawan, Kemal and Moeljadi, David and Prasojo,
Radityo Eko and Fung, Pascale and Baldwin, Timothy and Lau,
Jey Han and Sennrich, Rico and Ruder, Sebastian},
year={2022},
eprint={2205.15960},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@afaji](https://github.com/afaji) for adding this dataset.
|
false | # AutoTrain Dataset for project: diffusion-emotion-facial-expression-recognition
## Dataset Description
This dataset has been automatically processed by AutoTrain for project diffusion-emotion-facial-expression-recognition.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<224x224 RGB PIL image>",
"target": 3
},
{
"image": "<224x224 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise'], id=None)"
}
```
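The integer `target` indexes into the ordered `ClassLabel` names shown above, so the first sample (`target: 3`) is labeled `happy`:

```python
# Reproduces the ClassLabel mapping from the fields listing above.
names = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def target_to_label(target):
    """Map an integer target to its emotion name."""
    return names[target]

print(target_to_label(3))  # happy
```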
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1028 |
| valid | 261 |
|
false | # Dataset Card for "latin_english_parallel"
101k translation pairs between Latin and English, split 99/1/1 as train/test/val. These have been collected roughly 66% from the Loeb Classical Library and 34% from the Vulgate translation.
For those that were gathered from the Loeb Classical Library, alignment was performed manually between source and target sequences. Additionally, the English translations were both 1. copyrighted and 2. outdated. As such, we decided to modernize and transform them into ones that could be used in the public domain, as the original Latin is not copyrighted.
To do this, we used the gpt-3.5-turbo model from OpenAI with the prompt `Translate an old dataset from the 1800s to modern English while preserving the original meaning and exact same sentence structure. Retain extended adjectives, dependent clauses, and punctuation. Output the translation preceded by the text "Modern Translation: ". If a given translation is not a complete sentence, repeat the input sentence. \n'` followed by the source English.
We then manually corrected all outputs that did not conform to the standard.
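The conformance check described above can be sketched in a few lines (the helper names are hypothetical; the only detail taken from the prompt is the required `"Modern Translation: "` prefix):

```python
# Hypothetical sketch of the output-conformance check.
# The prompt instructs the model to precede each translation with a fixed prefix.
PREFIX = "Modern Translation: "

def conforms(output: str) -> bool:
    """True if the model output follows the required format."""
    return output.startswith(PREFIX)

def extract_translation(output: str) -> str:
    """Strip the prefix from a conforming output; leave others for manual review."""
    return output[len(PREFIX):] if conforms(output) else output

print(extract_translation("Modern Translation: The die has been cast."))
```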
Each sample is annotated with the index and file (and therefore author/work) that the sample is from. If you find errors, please feel free to submit a PR to fix them.
 |
false |
# Dataset Card for Common Voice Corpus 10.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Sardinian, Serbian, Slovak, Slovenian, Upper Sorbian, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_10_0", "en", use_auth_token=True)
def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."

    batch["sentence"] = transcription
    return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
``` |
false |
Base text used for training a tokenizer. |
false |
# Xtr-WikiQA
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Amazon Science](https://www.amazon.science/publications/cross-lingual-knowledge-distillation-for-answer-sentence-selection-in-low-resource-languages)
- **Paper:** [Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages](https://arxiv.org/abs/2305.16302)
- **Point of Contact:** [Yoshitomo Matsubara](yomtsub@amazon.com)
### Dataset Summary
***Xtr-WikiQA*** is an Answer Sentence Selection (AS2) dataset in 9 non-English languages, proposed in our paper accepted at ACL 2023 (Findings): **Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages**.
This dataset is based on an English AS2 dataset, WikiQA ([Original](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0), [Hugging Face](https://huggingface.co/datasets/wiki_qa)).
For translations, we used [Amazon Translate](https://aws.amazon.com/translate/).
### Languages
- Arabic (ar)
- Spanish (es)
- French (fr)
- German (de)
- Hindi (hi)
- Italian (it)
- Japanese (ja)
- Dutch (nl)
- Portuguese (pt)
File location: [`tsv/`](https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/tree/main/tsv)
## Dataset Structure
### Data Instances
This is an example instance from the Arabic training split of Xtr-WikiQA dataset.
```
{
"QuestionID": "Q1",
"Question": "كيف تتشكل الكهوف الجليدية؟",
"DocumentID": "D1",
"DocumentTitle": "كهف جليدي",
"SentenceID": "D1-0",
"Sentence": "كهف جليدي مغمور جزئيًا على نهر بيريتو مورينو الجليدي.",
"Label": 0
}
```
All the translated instances in tsv files are listed in the same order of the original (native) instances in the WikiQA dataset.
For example, the 2nd instance in [`tsv/ar-train.tsv`](https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/blob/main/tsv/ar-train.tsv) (Arabic-translated from English)
corresponds to the 2nd instance in [`WikiQA-train.tsv`](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0) (English).
### Data Fields
Each instance (a QA pair) consists of the following fields:
- `QuestionID`: Question ID (str)
- `Question`: Question to be answered (str)
- `DocumentID`: Document ID (str)
- `DocumentTitle`: Document title (str)
- `SentenceID`: Answer sentence ID in the document (str)
- `Sentence`: Answer sentence in the document (str)
- `Label`: Label indicating whether the answer sentence correctly answers the question (int, 1: correct, 0: incorrect)
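A minimal sketch of reading one of the TSV files into instances of this shape. It assumes plain tab-separated rows in the column order listed above with no header row; check the actual files before relying on that:

```python
import csv
import io

# Column order assumed from the field list above
FIELDS = ["QuestionID", "Question", "DocumentID", "DocumentTitle",
          "SentenceID", "Sentence", "Label"]

def parse_qa_rows(tsv_text: str):
    """Yield one dict per QA pair from TSV content in the assumed column order."""
    reader = csv.DictReader(io.StringIO(tsv_text), fieldnames=FIELDS, delimiter="\t")
    for row in reader:
        row["Label"] = int(row["Label"])  # 1: correct answer, 0: incorrect
        yield row

sample = "Q1\tHow are ice caves formed?\tD1\tIce cave\tD1-0\tA partially submerged ice cave.\t0"
instance = next(parse_qa_rows(sample))
print(instance["QuestionID"], instance["Label"])  # Q1 0
```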
### Data Splits
| | | **#Questions** | | | | **#Sentences** | |
|-------------------|------------:|---------------:|---------:|---|----------:|---------------:|---------:|
| | **train** | **dev** | **test** | | **train** | **dev** | **test** |
| **Each language** | 873 | 126 | 243 | | 8,671 | 1,130 | 2,351 |
See [our paper](#citation-information) for more details about the statistics of the datasets.
## Dataset Creation
### Source Data
The source of Xtr-WikiQA dataset is [WikiQA](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0).
## Additional Information
### Licensing Information
[CDLA-Permissive-2.0](LICENSE.md)
### Citation Information
```
@article{gupta2023cross-lingual,
title={Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages},
author={Gupta, Shivanshu and Matsubara, Yoshitomo and Chadha, Ankit and Moschitti, Alessandro},
journal={arXiv preprint arXiv:2305.16302},
year={2023}
}
```
### Contributions
- [Shivanshu Gupta](https://huggingface.co/shivanshu)
- [Yoshitomo Matsubara](https://huggingface.co/yoshitomo-matsubara)
- Ankit Chadha
- Alessandro Moschitti |
false |
# Dataset Details
This dataset is a modified version of [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf).
This dataset is used to fine-tune [Panther](https://huggingface.co/Rardilit/Panther_v1), a state-of-the-art LLM fine-tuned from the LLaMA-7B pretrained model.
A very small portion (5.3%) of the prompts and responses in this dataset was used to fine-tune and train [Panther](https://huggingface.co/Rardilit/Panther_v1).
## Dataset Details
### Dataset Structure
### Train
Train rows : 377k
### Validation
Validation rows : 20.3k
### Dataset Format
```python
input : "prompt"
output : "response"
```
## How to Use
```python
from datasets import load_dataset
dataset = load_dataset("Rardilit/Panther-dataset_v1")
``` |
true | https://github.com/atticusg/MoNLI
```
@inproceedings{geiger-etal-2020-neural,
address = {Online},
author = {Geiger, Atticus and Richardson, Kyle and Potts, Christopher},
booktitle = {Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP},
doi = {10.18653/v1/2020.blackboxnlp-1.16},
month = nov,
pages = {163--173},
publisher = {Association for Computational Linguistics},
title = {Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation},
url = {https://www.aclweb.org/anthology/2020.blackboxnlp-1.16},
year = {2020}}
``` |
true |
# Dataset Card for the SCOTUS lifelong editing task
## Dataset Description
- **Homepage: https://github.com/Thartvigsen/GRACE**
- **Repository: https://github.com/Thartvigsen/GRACE**
- **Paper: https://arxiv.org/abs/2211.11031**
- **Point of Contact: Tom Hartvigsen (tomh@mit.edu)**
### Dataset Summary
This dataset contains a relabeled sample from the SCOTUS dataset in [fairlex](https://huggingface.co/datasets/coastalcph/fairlex), as described in [our paper](https://arxiv.org/abs/2211.11031).
### Citation Information
```
@article{hartvigsen2023aging,
title={Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adapters},
author={Hartvigsen, Thomas and Sankaranarayanan, Swami and Palangi, Hamid and Kim, Yoon and Ghassemi, Marzyeh},
journal={arXiv preprint arXiv:2211.11031},
year={2023}
}
``` |
false |
# Dataset Card for Food-101
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[Food-101 Dataset](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/)
- **Repository:** N/A
- **Paper:**[Paper](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
This dataset consists of 101 food categories, with 101,000 images. For each class, 250 manually reviewed test images are provided as well as 750 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels. All images were rescaled to have a maximum side length of 512 pixels.
### Supported Tasks and Leaderboards
- image-classification
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': '/root/.cache/huggingface/datasets/downloads/extracted/6e1e8c9052e9f3f7ecbcb4b90860668f81c1d36d86cc9606d49066f8da8bfb4f/food-101/images/churros/1004234.jpg',
'label': 23
}
```
### Data Fields
The data instances have the following fields:
- `image`: a `string` filepath to an image.
- `label`: an `int` classification label.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|food101|75750|25250|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{bossard14,
title = {Food-101 -- Mining Discriminative Components with Random Forests},
author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc},
booktitle = {European Conference on Computer Vision},
year = {2014}
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
|
true |
# Dataset Card for roman_urdu_hate_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [roman_urdu_hate_speech homepage](https://aclanthology.org/2020.emnlp-main.197/)
- **Repository:** [roman_urdu_hate_speech repository](https://github.com/haroonshakeel/roman_urdu_hate_speech)
- **Paper:** [Hate-Speech and Offensive Language Detection in Roman Urdu](https://aclanthology.org/2020.emnlp-main.197.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [M. Haroon Shakeel](mailto:m.shakeel@lums.edu.pk)
### Dataset Summary
The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios.
### Supported Tasks and Leaderboards
- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used both for multi-class classification and for binary classification, as it contains both coarse-grained and fine-grained labels.
### Languages
The text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.
## Dataset Structure
### Data Instances
The dataset consists of two parts divided as a set of two types, Coarse grained examples and Fine Grained examples. The difference is that in the coarse grained example the tweets are labelled as abusive or normal whereas in the fine grained version there are several classes of hate associated with a tweet.
For the coarse-grained segment of the dataset, the label mapping is:

Task 1: Coarse-grained Classification Labels

- 0: Abusive/Offensive
- 1: Normal

Whereas for the fine-grained segment of the dataset, the label mapping is:

Task 2: Fine-grained Classification Labels

- 0: Abusive/Offensive
- 1: Normal
- 2: Religious Hate
- 3: Sexism
- 4: Profane/Untargeted
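The two mappings can be written down as plain dictionaries (a sketch; the integer-to-name pairs are exactly those listed above, while the helper name is invented):

```python
# Label mappings for the two RUHSOLD sub-tasks
COARSE_LABELS = {0: "Abusive/Offensive", 1: "Normal"}
FINE_LABELS = {
    0: "Abusive/Offensive",
    1: "Normal",
    2: "Religious Hate",
    3: "Sexism",
    4: "Profane/Untargeted",
}

def label_name(label_id: int, fine_grained: bool = False) -> str:
    """Map an integer label to its class name for the chosen sub-task."""
    return (FINE_LABELS if fine_grained else COARSE_LABELS)[label_id]

print(label_name(0))                     # Abusive/Offensive
print(label_name(2, fine_grained=True))  # Religious Hate
```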
An example from Roman Urdu Hate Speech looks as follows:
```
{
'tweet': 'there are some yahodi daboo like imran chore zakat khore'
'label': 0
}
```
### Data Fields
- `tweet`: a string denoting the tweet; 10,000 tweets were selected by random sampling from a base of 50,000 tweets and annotated for the dataset.
- `label`: an annotation manually assigned by three independent annotators; during the annotation process, all conflicts were resolved by a majority vote among the three annotators.
### Data Splits
The data for each of the segments, coarse-grained and fine-grained, is further split into training, validation and test sets. The data is split into train, test, and validation sets with a 70/20/10 split ratio using stratification based on fine-grained labels.
The use of stratified sampling is deemed necessary to preserve the same labels ratio across all splits.
The final split sizes are as follows:

| Train | Valid | Test |
|------:|------:|-----:|
| 7209  | 2003  | 801  |
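The stratified sampling described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' exact procedure:

```python
import random
from collections import defaultdict

def stratified_split(examples, ratios=(0.7, 0.2, 0.1), seed=0):
    """Split (text, label) pairs into train/test/valid, preserving label ratios."""
    by_label = defaultdict(list)
    for example in examples:
        by_label[example[1]].append(example)

    rng = random.Random(seed)
    train, test, valid = [], [], []
    for items in by_label.values():
        rng.shuffle(items)
        n = len(items)
        cut1 = int(n * ratios[0])
        cut2 = cut1 + int(n * ratios[1])
        train.extend(items[:cut1])
        test.extend(items[cut1:cut2])
        valid.extend(items[cut2:])
    return train, test, valid

# e.g. 80 "normal" and 20 "abusive" examples keep roughly the same ratio per split
data = [("tweet", "normal")] * 80 + [("tweet", "abusive")] * 20
train, test, valid = stratified_split(data)
print(len(train), len(test), len(valid))  # 70 20 10
```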
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, Asim Karim during work done at Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Roman Urdu Hate Speech Dataset Repository](https://github.com/haroonshakeel/roman_urdu_hate_speech) which is under MIT License.
### Citation Information
```bibtex
@inproceedings{rizwan2020hate,
title={Hate-speech and offensive language detection in roman Urdu},
author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={2512--2522},
year={2020}
}
```
### Contributions
Thanks to [@bp-high](https://github.com/bp-high) for adding this dataset. |
true |
Machine translated Ohsumed collection (EN to ID)
Original corpora: http://disi.unitn.it/moschitti/corpora.htm
Translated using: https://huggingface.co/Helsinki-NLP/opus-mt-en-id
Compatible with HuggingFace text-classification script (Tested in 4.17)
https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/text-classification
[Moschitti, 2003a]. Alessandro Moschitti, Natural Language Processing and Text Categorization: a study on the reciprocal beneficial interactions, PhD thesis, University of Rome Tor Vergata, Rome, Italy, May 2003. |
false |
# Pages of Early Soviet Performance (PESP)
This dataset was created as part of the [Pages of Early Soviet Performance](https://cdh.princeton.edu/projects/pages-early-soviet-performance/) project at Princeton and provides text and image research data from a previously scanned [collection of illustrated periodicals](https://dpul.princeton.edu/slavic/catalog?f%5Breadonly_collections_ssim%5D%5B%5D=Russian+Illustrated+Periodicals) held by Princeton University's Slavic Collections. The project was a partnership with ITMO University in Saint Petersburg. Our work focused on document segmentation and the prediction of image, text, title, and mixedtext regions in the document images. The mixedtext category refers to segments where the typeface and text layout are mixed with other visual elements such as graphics, photographs, and illustrations. This category identifies sections that present problems for OCR and also highlights the experimental use of text, images, and other elements in the documents.
For each of the ten journals of interest in Princeton's digital collections (DPUL), we started with the IIIF manifest URI. With these manifests, we downloaded each of the 24,000 document images. The URI for each of the images is included in the dataset and a full list is available in `IIIF_URIs.json`.
## Authors
Natalia Ermolaev, Thomas Keenan, Katherine Reischl, Andrew Janco, Quinn Dombrowski, Antonina Puchkovskaia, Alexander Jacobson, Anastasiia Mamonova, Michael Galperin and Vladislav Tretyak
## Journal manifests
- [Эрмитаж](https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest)
- [Вестник искусств](https://figgy.princeton.edu/concern/scanned_resources/ad256b35-9ad0-4f75-bf83-3bad1a7c6018/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/ad256b35-9ad0-4f75-bf83-3bad1a7c6018/manifest)
- [Советский театр](https://figgy.princeton.edu/concern/scanned_resources/f33993bb-a041-40a1-b11f-f660da825583/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/f33993bb-a041-40a1-b11f-f660da825583/manifest)
- [Рабис](https://figgy.princeton.edu/concern/scanned_resources/01f4236f-0a2f-473c-946f-d9bbec12f8ea/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/01f4236f-0a2f-473c-946f-d9bbec12f8ea/manifest)
- [Даёшь](https://figgy.princeton.edu/concern/scanned_resources/e036a5da-97a8-4041-ad62-a57af44359e2/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/e036a5da-97a8-4041-ad62-a57af44359e2/manifest)
- [Персимфанс](https://figgy.princeton.edu/concern/scanned_resources/af43d19a-3659-4dd0-a0fc-4c74ce521ad6/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/af43d19a-3659-4dd0-a0fc-4c74ce521ad6/manifest)
- [Тридцать дней](https://figgy.princeton.edu/concern/scanned_resources/d2d488af-2980-4554-a9ef-aacbaf463ec8/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/d2d488af-2980-4554-a9ef-aacbaf463ec8/manifest)
- [За пролетарское искусство](https://figgy.princeton.edu/concern/scanned_resources/38f89d57-8e64-4033-97d6-b925c407584a/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/38f89d57-8e64-4033-97d6-b925c407584a/manifest)
- [Бригада художников](https://figgy.princeton.edu/concern/scanned_resources/66d00a87-5ea9-439a-a909-95d697401a2b/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/66d00a87-5ea9-439a-a909-95d697401a2b/manifest)
- [Зрелища](https://figgy.princeton.edu/concern/scanned_resources/1af8b322-a0b1-46af-8541-5c3054af8098/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/1af8b322-a0b1-46af-8541-5c3054af8098/manifest)
## Model
Using [makesense.ai](https://www.makesense.ai/) and a custom active learning application called ["Mayakovsky"](https://github.com/CDH-ITMO-Periodicals-Project/mayakovsky) we generated training data for a [YOLOv5 model](https://docs.ultralytics.com/tutorials/train-custom-datasets/). The model was fine-tuned on the new labels and predictions were generated for all images in the collection.
## OCR
Using the model's predictions for image, title, text and mixedtext segments, we cropped the image using the bounding boxes and ran OCR on each document segment using Tesseract, Google Vision, and ABBYY FineReader. Given that the output of these various OCR engines can be difficult to compare, the document segments give a common denominator for comparison of OCR outputs. Having three variations of the extracted text can be useful for experiments with OCR post-correction.
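For instance, pairwise agreement between two engines' outputs over the same segment can serve as a cheap quality signal (a sketch using the standard-library `difflib`; the sample strings are invented):

```python
import difflib

def agreement(a: str, b: str) -> float:
    """Similarity ratio (0..1) between two OCR outputs for the same segment."""
    return difflib.SequenceMatcher(None, a, b).ratio()

# Invented outputs for the same cropped text block
tesseract_text = "the cat sat on the mat"
abbyy_text = "the cat sat on the rnat"  # common OCR confusion: 'm' read as 'rn'

score = agreement(tesseract_text, abbyy_text)
print(round(score, 2))
```

Low agreement between engines flags segments worth routing to post-correction or manual review.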
## Dataset
The dataset contains an entry for each image with the following fields:
- filename: the image name (ex. 'Советский театр_1932 No. 4_16') with journal name, year, issue, page.
- dpul: the URL for the image's journal in Digital Princeton University Library
- journal: the journal name
- year: the year of the journal issue
- issue: the issue for the image
- URI: the IIIF URI used to fetch the image from Princeton's IIIF server
- yolo: the raw model prediction (ex '3 0.1655 0.501396 0.311'), in Yolo's normalized xywh format (object-class x y width height). The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3.
- yolo_predictions: a List with a dictionary for each of the model's predictions with fields for:
- label: the predicted label
- x: the x-value location of the center point of the prediction
- y: the y-value location of the center point of the prediction
- w: the total width of the prediction's bounding box
- h: the total height of the prediction's bounding box
- abbyy_text: the text extracted from the predicted document segment using ABBYY FineReader. Note that due to costs, only about 800 images have this data
- tesseract_text: the text extracted from the predicted document segment using Tesseract.
- vision_text: the text extracted from the predicted document segment using Google Vision.
- vision_labels: entities recognized by Google Vision in image blocks and separated by | (ex. Boating|Boat|Printmaking)
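As a minimal sketch (the prediction string and page size below are made up, not taken from the dataset), the raw `yolo` string can be parsed into one of the prediction dictionaries described above and scaled back to pixel coordinates:

```python
# Parse one YOLO-format prediction line (object-class x y width height,
# coordinates normalized to [0, 1]) and scale it to pixel coordinates.
LABELS = {0: "image", 1: "mixedtext", 2: "title", 3: "textblock"}

def parse_yolo_line(line, img_w, img_h):
    parts = line.split()
    cls, x, y, w, h = int(parts[0]), *map(float, parts[1:5])
    return {
        "label": LABELS[cls],
        # center point and box size, in pixels
        "x": x * img_w,
        "y": y * img_h,
        "w": w * img_w,
        "h": h * img_h,
    }

# hypothetical prediction on a 1000 x 1500 px page scan
pred = parse_yolo_line("3 0.5 0.25 0.4 0.1", img_w=1000, img_h=1500)
# pred["label"] is "textblock", centered near (500, 375) px
```

The same arithmetic in reverse gives the crop boxes used for the per-segment OCR step.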
# Usage
```python
from datasets import load_dataset
dataset = load_dataset('ajanco/pesp')
for item in dataset['train']:
for prediction in item['yolo_predictions']:
print(prediction)
``` |
false | # Dataset Card for XSum
## Dataset Description
### Links
- **Homepage:** https://arxiv.org/abs/1808.08745
- **Repository:** https://arxiv.org/abs/1808.08745
- **Paper:** https://arxiv.org/abs/1808.08745
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
This repository contains data and code for our EMNLP 2018 paper "[Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)".
### Languages
English
## Dataset Structure
### Data Instances
The XSum dataset is made of 226711 documents split into train, test and validation sets.
The first instance in the training set:
{'dialogue': 'The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.\nRepair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.\nTrains on the west coast mainline face disruption due to damage at the Lamington Viaduct.\nMany businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.\nFirst Minister Nicola Sturgeon visited the area to inspect the damage.\nThe waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.\nJeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.\nHowever, she said more preventative work could have been carried out to ensure the retaining wall did not fail.\n"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we\'re neglected or forgotten," she said.\n"That may not be true but it is perhaps my perspective over the last few days.\n"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?"\nMeanwhile, a flood alert remains in place across the Borders because of the constant rain.\nPeebles was badly hit by problems, sparking calls to introduce more defences in the area.\nScottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.\nThe Labour Party\'s deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.\nHe said it was important to get the flood protection plan right but backed calls to speed up the process.\n"I was quite taken aback by the amount of damage that has been done," he said.\n"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses."\nHe 
said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.\nHave you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on selkirk.news@bbc.co.uk or dumfries@bbc.co.uk.', 'summary': 'Clean-up operations are continuing across the Scottish Borders and Dumfries and Galloway after flooding caused by Storm Frank.',
'id': '35232142'}
### Data Fields
- dialogue: text of dialogue.
- summary: one line human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 204045
- val: 11332
- test: 11334
## Dataset Creation
### Curation Rationale
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
### Annotation process
## Licensing Information
MIT License
## Citation Information
```
@InProceedings{xsum-emnlp,
author = "Shashi Narayan and Shay B. Cohen and Mirella Lapata",
title = "Don't Give Me the Details, Just the Summary! {T}opic-Aware Convolutional Neural Networks for Extreme Summarization",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing ",
year = "2018",
address = "Brussels, Belgium",
}
```
## Contributions
Thanks to [@Edinburgh NLP](https://github.com/EdinburghNLP) for adding this dataset. |
false |
# esCorpius Multilingual
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, these present important shortcomings for languages other than English, as they are either too small or of low quality as a result of sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from nearly 1 PB of Common Crawl data. It is the most extensive corpus in some of the languages covered with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we keep both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius-m has been released under a CC BY-NC-ND 4.0 license.
## Usage
Replace `revision` with the language of your choice (in this case, `it` for Italian):
```python
from datasets import load_dataset

dataset = load_dataset('LHF/escorpius-m', split='train', streaming=True, revision='it')
```
## Other corpora
- esCorpius-mr multilingual *raw* corpus (not deduplicated): https://huggingface.co/datasets/LHF/escorpius-mr
- esCorpius original *Spanish only* corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius
## Citation
Link to paper: https://www.isca-speech.org/archive/pdfs/iberspeech_2022/gutierrezfandino22_iberspeech.pdf / https://arxiv.org/abs/2206.15147
Cite this work:
```
@inproceedings{gutierrezfandino22_iberspeech,
author={Asier Gutiérrez-Fandiño and David Pérez-Fernández and Jordi Armengol-Estapé and David Griol and Zoraida Callejas},
title={{esCorpius: A Massive Spanish Crawling Corpus}},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
year=2022,
booktitle={Proc. IberSPEECH 2022},
pages={126--130},
doi={10.21437/IberSPEECH.2022-26}
}
```
## Disclaimer
We did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus.
|
false |
## Dataset Description
- **Homepage:** the [Gatherer](https://gatherer.wizards.com/Pages/)
- **Repository:** https://github.com/alcazar90/croupier-mtg-dataset
### Dataset Summary
A card images dataset of 4 types of creatures from Magic the Gathering card game: elf, goblin, knight, and zombie.
## Dataset Creation
All card information from the Magic the Gathering card game is publicly available on the
[Gatherer](https://gatherer.wizards.com/Pages/) website, the official Magic card database. The dataset is a
subset covering 4 kinds of creatures from the game. |
false |
# Dataset Card for filtered_cuad
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad)
- **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/)
- **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org)
### Dataset Summary
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD. It excludes legal contracts with an Agreement date prior to 2002 and contracts which are not Business to Business. From the 41 categories we filtered them down to 12 which we considered the most crucial.
We wanted a small dataset to quickly fine-tune different models without sacrificing the categories we deemed important. Most questions had to be removed because they had no answer, which is problematic since it can skew the resulting metrics such as the F1 score and the AUPR curve.
CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [44],
"text": ['DISTRIBUTOR AGREEMENT']
},
"context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...',
"id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0",
"question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract",
"title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT"
}
```
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
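The `answers` field follows the SQuAD convention: each `answer_start` is a character offset into `context`. A quick sanity check, sketched on a made-up miniature instance (not a real contract from the dataset):

```python
# Verify that each answer span actually occurs at its claimed offset:
# context[start : start + len(text)] must equal text.
def check_answer_span(context, answer_start, text):
    return context[answer_start:answer_start + len(text)] == text

# hypothetical miniature instance in the same shape as the example above
context = "EXHIBIT 10.6 DISTRIBUTOR AGREEMENT between the parties."
answers = {"answer_start": [13], "text": ["DISTRIBUTOR AGREEMENT"]}

ok = all(
    check_answer_span(context, s, t)
    for s, t in zip(answers["answer_start"], answers["text"])
)
# ok is True for a well-formed instance
```

A check like this is useful after any preprocessing that rewrites `context`, since a shifted offset silently corrupts extractive-QA training.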
### Data Splits
This dataset is split into train and test sets. The number of samples in each set is given below:
| | Train | Test |
| ----- | ------ | ---- |
| CUAD | 5442 | 936 |
## Dataset Creation
### Curation Rationale
A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.
Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.
To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.
### Source Data
#### Initial Data Collection and Normalization
The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.
| Type of Contract                                   | # of Docs |
|----------------------------------------------------|-----------|
| Affiliate Agreement                                | 8         |
| Agency Agreement                                   | 8         |
| Collaboration/Cooperation Agreement                | 26        |
| Co-Branding Agreement                              | 6         |
| Consulting Agreement                               | 11        |
| Development Agreement                              | 28        |
| Distributor Agreement                              | 23        |
| Endorsement Agreement                              | 10        |
| Franchise Agreement                                | 14        |
| Hosting Agreement                                  | 12        |
| IP Agreement                                       | 16        |
| Joint Venture Agreement                            | 22        |
| License Agreement                                  | 32        |
| Maintenance Agreement                              | 24        |
| Manufacturing Agreement                            | 6         |
| Marketing Agreement                                | 16        |
| Non-Compete/No-Solicit/Non-Disparagement Agreement | 3         |
| Outsourcing Agreement                              | 12        |
| Promotion Agreement                                | 9         |
| Reseller Agreement                                 | 12        |
| Service Agreement                                  | 24        |
| Sponsorship Agreement                              | 17        |
| Supply Agreement                                   | 13        |
| Strategic Alliance Agreement                       | 32        |
| Transportation Agreement                           | 1         |
| **TOTAL**                                          | 385       |
Categories (the 12 retained after filtering):
- Document Name
- Parties
- Agreement Date
- Effective Date
- Expiration Date
- Renewal Term
- Notice Period To Terminate Renewal
- Governing Law
- Non-Compete
- Exclusivity
- Change Of Control
- Anti-Assignment
#### Who are the source language producers?
The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD.
### Annotations
#### Annotation process
The labeling process included multiple steps to ensure accuracy:
1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.
2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.
3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.
4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category-by-category, and highlighted clauses that they believed were mislabeled.
5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.
6. eBrevia Extras Review: attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but that were not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the remaining “extras” were incorrect labels.
7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.
#### Who are the annotators?
Answered in above section.
### Personal and Sensitive Information
Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*), underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”.
For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.
Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:
THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.
To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.”
Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Attorney Advisors
Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu
Law Student Leaders
John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran
Law Student Contributors
Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin
Technical Advisors & Contributors
Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen
### Licensing Information
CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.
The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.
Privacy Policy & Disclaimers
The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.
The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer.
### Citation Information
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
false |
- 38,015,081 rows |
false | # Physical-Action-Effect-Prediction
Official dataset for ["What Action Causes This? Towards Naive Physical Action-Effect Prediction"](https://aclanthology.org/P18-1086/), ACL 2018.

## Overview
Despite recent advances in knowledge representation, automated reasoning, and machine learning, artificial agents still lack the ability to understand basic action-effect relations regarding the physical world, for example, the action of cutting a cucumber most likely leads to the state where the cucumber is broken apart into smaller pieces. If artificial agents (e.g., robots) ever become our partners in joint tasks, it is critical to empower them with such action-effect understanding so that they can reason about the state of the world and plan for actions. Towards this goal, this paper introduces a new task on naive physical action-effect prediction, which addresses the relations between concrete actions (expressed in the form of verb-noun pairs) and their effects on the state of the physical world as depicted by images. We collected a dataset for this task and developed an approach that harnesses web image data through distant supervision to facilitate learning for action-effect prediction. Our empirical results have shown that web data can be used to complement a small number of seed examples (e.g., three examples for each action) for model learning. This opens up possibilities for agents to learn physical action-effect relations for tasks at hand through communication with humans with a few examples.
### Datasets
- This dataset contains action-effect information for 140 verb-noun pairs. It has two parts: effects described by natural language, and effects depicted in images.
- The language data contains verb-noun pairs and their effects described in natural language. For each verb-noun pair, its possible effects are described by 10 different annotators. The format for each line is `verb noun, effect_sentence, [effect_phrase_1, effect_phrase_2, effect_phrase_3, ...]`. Effect_phrases were automatically extracted from their corresponding effect_sentences.
- The image data contains images depicting action effects. For each verb-noun pair, an average of 15 positive images and 15 negative images were collected. Positive images are those deemed to capture the resulting world state of the action. And negative images are those deemed to capture some state of the related object (*i.e.*, the nouns in the verb-noun pairs), but are not the resulting state of the corresponding action.
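As a sketch of consuming the language data, the line format above can be split into its three parts. The example line, and the assumption that effect phrases are `', '`-separated inside the square brackets, are illustrative rather than guaranteed by the released file:

```python
# Split one language-data line of the form
#   verb noun, effect_sentence, [phrase_1, phrase_2, ...]
# into its (action, sentence, phrases) parts. Partitioning on the first
# ", " is safe for the action because verb-noun pairs contain no comma.
def parse_action_effect_line(line):
    head, _, bracket = line.partition("[")
    action, _, sentence = head.rstrip(", ").partition(", ")
    phrases = bracket.rstrip("]").split(", ")
    return action, sentence, phrases

# hypothetical line, not taken verbatim from the released data
action, sentence, phrases = parse_action_effect_line(
    "cut cucumber, The cucumber is in smaller pieces, [in pieces, sliced]"
)
# action == "cut cucumber", phrases == ["in pieces", "sliced"]
```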
### Download
```python
from datasets import load_dataset
dataset = load_dataset("sled-umich/Action-Effect")
```
* [HuggingFace](https://huggingface.co/datasets/sled-umich/Action-Effect)
* [Google Drive](https://drive.google.com/drive/folders/1P1_xWdCUoA9bHGlyfiimYAWy605tdXlN?usp=sharing)
* Dropbox:
* [Language Data](https://www.dropbox.com/s/pi1ckzjipbqxyrw/action_effect_sentence_phrase.txt?dl=0)
* [Image Data](https://www.dropbox.com/s/ilmfrqzqcbdf22k/action_effect_image_rs.tar.gz?dl=0)
### Cite
[What Action Causes This? Towards Naïve Physical Action-Effect Prediction](https://sled.eecs.umich.edu/publication/dblp-confacl-vanderwende-cyg-18/). *Qiaozi Gao, Shaohua Yang, Joyce Chai, Lucy Vanderwende*. ACL, 2018. [[Paper]](https://aclanthology.org/P18-1086/) [[Slides]](https://aclanthology.org/attachments/P18-1086.Presentation.pdf)
```tex
@inproceedings{gao-etal-2018-action,
title = "What Action Causes This? Towards Naive Physical Action-Effect Prediction",
author = "Gao, Qiaozi and
Yang, Shaohua and
Chai, Joyce and
Vanderwende, Lucy",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1086",
doi = "10.18653/v1/P18-1086",
pages = "934--945",
abstract = "Despite recent advances in knowledge representation, automated reasoning, and machine learning, artificial agents still lack the ability to understand basic action-effect relations regarding the physical world, for example, the action of cutting a cucumber most likely leads to the state where the cucumber is broken apart into smaller pieces. If artificial agents (e.g., robots) ever become our partners in joint tasks, it is critical to empower them with such action-effect understanding so that they can reason about the state of the world and plan for actions. Towards this goal, this paper introduces a new task on naive physical action-effect prediction, which addresses the relations between concrete actions (expressed in the form of verb-noun pairs) and their effects on the state of the physical world as depicted by images. We collected a dataset for this task and developed an approach that harnesses web image data through distant supervision to facilitate learning for action-effect prediction. Our empirical results have shown that web data can be used to complement a small number of seed examples (e.g., three examples for each action) for model learning. This opens up possibilities for agents to learn physical action-effect relations for tasks at hand through communication with humans with a few examples.",
}
```
|
false | # range3/wiki40b-ja
This dataset consists of three parquet files from the wiki40b dataset with only Japanese data extracted. It is generated by the following python code.
このデータセットは、wiki40bデータセットの日本語データのみを抽出した3つのparquetファイルで構成されます。以下のpythonコードによって生成しています。
```py
import datasets
dss = datasets.load_dataset(
"wiki40b",
"ja",
beam_runner="DirectRunner",
)
for split,ds in dss.items():
ds.to_parquet(f"wikipedia-ja-20230101/{split}.parquet")
``` |
false | # Dataset Card for "AID_MultiLabel"
## Dataset Description
- **Paper:** [AID: A benchmark data set for performance evaluation of aerial scene classification](https://ieeexplore.ieee.org/iel7/36/4358825/07907303.pdf)
- **Paper:** [Relation Network for Multi-label Aerial Image Classification](https://ieeexplore.ieee.org/iel7/36/4358825/08986556.pdf)
### Licensing Information
CC0: Public Domain
## Citation Information
Imagery:
[AID: A benchmark data set for performance evaluation of aerial scene classification](https://ieeexplore.ieee.org/iel7/36/4358825/07907303.pdf)
Multilabels:
[Relation Network for Multi-label Aerial Image Classification](https://ieeexplore.ieee.org/iel7/36/4358825/08986556.pdf)
```
@article{xia2017aid,
title = {AID: A benchmark data set for performance evaluation of aerial scene classification},
author = {Xia, Gui-Song and Hu, Jingwen and Hu, Fan and Shi, Baoguang and Bai, Xiang and Zhong, Yanfei and Zhang, Liangpei and Lu, Xiaoqiang},
year = 2017,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 55,
number = 7,
pages = {3965--3981}
}
@article{hua2019relation,
title = {Relation Network for Multi-label Aerial Image Classification},
author = {Hua, Yuansheng and Mou, Lichao and Zhu, Xiao Xiang},
  year = 2019,
  doi = {10.1109/TGRS.2019.2963364},
journal = {IEEE Transactions on Geoscience and Remote Sensing}
}
``` |