| text-classification (bool, 2 classes) | text (string, lengths 0-664k) |
|---|---|
false |
This dataset contains 93,265 English poems. |
false |
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arXiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
false |
This is a parsed version of [github-jupyter-parsed](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed), with markdown and code pairs. We provide the preprocessing script in [preprocessing.py](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed-v2/blob/main/preprocessing.py). The data is deduplicated and consists of 451662 examples.
For similar datasets with text and Python code, there is [CoNaLa](https://huggingface.co/datasets/neulab/conala) benchmark from StackOverflow, with some samples curated by annotators. |
false | |
false |
# ParaDetox: Detoxification with Parallel Data (English)
This repository contains information about the ParaDetox dataset -- the first parallel corpus for the detoxification task -- as well as models and an evaluation methodology for the detoxification of English texts. The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at the ACL 2022 main conference.
## ParaDetox Collection Pipeline
The ParaDetox Dataset collection was done via [Yandex.Toloka](https://toloka.yandex.com/) crowdsource platform. The collection was done in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
All these steps were designed to ensure high data quality and to automate the collection process. For more details please refer to the original paper.
## ParaDetox Dataset
As a result, we get paraphrases for 11,939 toxic sentences (on average 1.66 paraphrases per sentence), 19,766 paraphrases in total. The whole dataset can be found [here](https://github.com/skoltech-nlp/paradetox/blob/main/paradetox/paradetox.tsv).
In addition to the full ParaDetox dataset, we also make public [samples](https://github.com/skoltech-nlp/paradetox/blob/main/paradetox/paradetox_cannot_rewrite.tsv) that were marked by annotators as "cannot rewrite" in *Task 1* of the crowdsourcing pipeline.
## Detoxification evaluation
The automatic evaluation of the models was based on three parameters:
* *style transfer accuracy* (**STA**): the percentage of non-toxic outputs identified by a style classifier. We pretrained a toxicity classifier on Jigsaw data and made it available in the HuggingFace🤗 [repo](https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier).
* *content preservation* (**SIM**): cosine similarity between the embeddings of the original text and the output computed with the model of [Wieting et al. (2019)](https://aclanthology.org/P19-1427/).
* *fluency* (**FL**): percentage of fluent sentences identified by a RoBERTa-based classifier of linguistic acceptability trained on the [CoLA dataset](https://nyu-mll.github.io/CoLA/).
All code used in our experiments to evaluate different detoxification models can be run via [this Colab notebook](https://colab.research.google.com/drive/1xTqbx7IPF8bVL2bDCfQSDarA43mIPefE?usp=sharing).
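As a rough illustration of how the STA metric can be computed with the released classifier, the hedged sketch below scores outputs with the toxicity classifier via the 🤗 `transformers` pipeline; the non-toxic label name (`"neutral"`) is an assumption about the model's config, and the snippet is not the paper's evaluation code.
```python
from transformers import pipeline

# Toxicity classifier released with the paper (see the repo link above).
clf = pipeline("text-classification", model="SkolkovoInstitute/roberta_toxicity_classifier")

outputs = ["you are a wonderful person", "some detoxified model output"]
preds = clf(outputs)

# STA = fraction of outputs classified as non-toxic.
# NOTE: the label string "neutral" is an assumption, check the model card.
sta = sum(p["label"] == "neutral" for p in preds) / len(preds)
print(f"STA: {sta:.2%}")
```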
## Detoxification model
The **new SOTA** for the detoxification task -- a BART (base) model trained on the ParaDetox dataset -- is released in the HuggingFace🤗 repository [here](https://huggingface.co/SkolkovoInstitute/bart-base-detox).
You can also check out our [demo](https://detoxifier.nlp.zhores.net/junction/) and telegram [bot](https://t.me/rudetoxifierbot).
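A minimal sketch of running the released detoxification model with 🤗 `transformers`; the generation parameters below are illustrative defaults, not the settings used in the paper.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SkolkovoInstitute/bart-base-detox"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

toxic = "this is a damn stupid idea"
inputs = tokenizer(toxic, return_tensors="pt")
# num_beams / max_length are assumed values for illustration only.
ids = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```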
## Citation
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
## Contacts
If you find some issue, do not hesitate to add it to [Github Issues](https://github.com/skoltech-nlp/paradetox/issues).
For any questions, please contact: Daryna Dementieva (dardem96@gmail.com) |
false |
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language; both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in the [Data Splits](#data-splits) subsection provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
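For orientation only, here is a hedged sketch of how an OSCAR configuration can be loaded with 🤗 `datasets`; the repository and configuration names are examples, so for this extract substitute the repository id under which it is actually published.
```python
from datasets import load_dataset

# Streaming avoids downloading an entire subcorpus up front.
# "unshuffled_deduplicated_it" is only an example configuration name.
dataset = load_dataset("oscar", "unshuffled_deduplicated_it", split="train", streaming=True)

for example in dataset.take(3):
    print(example["text"][:200])
```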
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises the pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of parallel operations at a given time bounded by the number of available threads rather than the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, in goclassy's pipeline one does not have to wait for a whole WET file to download, decompress and classify before downloading and processing the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
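The line-level filter described above can be approximated in a few lines of Python; this is only an illustration of the rule (length measured in UTF-8 characters, invalid UTF-8 dropped, and whether whitespace is stripped first is an assumption), not the actual goclassy code, which is written in Go.
```python
def keep_line(raw: bytes) -> bool:
    """Return True if a line survives the pre-classification filter."""
    try:
        text = raw.decode("utf-8")   # lines with invalid UTF-8 are discarded
    except UnicodeDecodeError:
        return False
    return len(text.strip()) >= 100  # lines shorter than 100 characters are discarded

print(keep_line(b"too short"))          # False
print(keep_line(("x" * 120).encode()))  # True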
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
    author = "Ortiz Su{\'a}rez, Pedro Javier  and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
    author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
    editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
    publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
false |
# ru_instruct_gpt4
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Dataset of GPT-4 generated instructions in Russian. Will soon be updated with more examples.
### Languages
Russian
|
false |
# Dataset Card for SCAT
## Table of Contents
- [Dataset Card for SCAT](#dataset-card-for-scat)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Machine Translation](#machine-translation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Preprocessing](#additional-preprocessing)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/neulab/contextual-mt/tree/master/data/scat)
- **Paper:** [ACL 2021](https://aclanthology.org/2021.acl-long.65/)
- **Point of Contact:** [Kayo Yin](mailto:kayoyin@berkeley.edu)
### Dataset Summary
The Supporting Context for Ambiguous Translations corpus (SCAT) is a dataset of English-to-French translations annotated with human rationales used for resolving ambiguity in pronoun anaphora resolution for multi-sentence translation.
**Disclaimer**: *The SCAT corpus was released in the ACL 2021 paper ["Do Context-Aware Translation Models Pay the Right Attention?"](https://aclanthology.org/2021.acl-long.65/) by Yin et al. (2021), and an original version of the corpus is hosted on [Github](https://github.com/neulab/contextual-mt/tree/master/data/scat) with no licensing information. This dataset contains a curated version of the original corpus where examples containing nested or malformed tags were removed (refer to the [filter_scat.py](filter_scat.py) script for more details).*
### Supported Tasks and Leaderboards
#### Machine Translation
Refer to the [original paper](https://aclanthology.org/2021.acl-long.65/) for additional details on plausibility evaluation for document-level MT systems.
### Languages
The dataset contains source English sentences containing anaphoric pronouns, selected from the [OpenSubtitles 2018](https://www.aclweb.org/anthology/L18-1275/) corpus and translated into French (`fr`).
## Dataset Structure
### Data Instances
The dataset contains a single default configuration. Dataset examples have the following format:
```json
{
"id": 0,
"context_en": "Air, water, the continents. So, what is your project about and what are its chances of winning? - Well, my project is awesome. - Oh, good. I took two plants, and I gave them sun and water",
"en": "But I gave one special attention to see if it would grow more.",
"context_fr": "L'air, l'eau, les continents. Donc, quel est le sujet de ton projet et quelles sont ses chances de gagner ? - Bien, mon projet est impressionnant. - Oh, bien. J'ai pris deux plantes , et je leur ai donné de l'eau et du soleil.",
"fr": "Mais j'ai donné une attention particulière à une pour voir si elle grandit plus.",
"contrast_fr": "Mais j'ai donné une attention particulière à une pour voir si il grandit plus.",
"context_en_with_tags": "Air, water, the continents. So, what is your project about and what are its chances of winning? - Well, my project is awesome. - Oh, good. I took two plants, and I gave them sun and water",
"en_with_tags": "But I gave one special attention to see if <p>it</p> would grow more.",
"context_fr_with_tags": "L'air, l'eau, les continents. Donc, quel est le sujet de ton projet et quelles sont ses chances de gagner ? - Bien, mon projet est impressionnant. - Oh, bien. J'ai pris deux <hon>plantes<hoff> , et je leur ai donné de l'eau et du soleil.",
"fr_with_tags": "Mais j'ai donné une attention particulière à une pour voir si <p>elle</p> grandit plus.",
"contrast_fr_with_tags": "Mais j'ai donné une attention particulière à une pour voir si <p>il</p> grandit plus.",
"has_supporting_context": true,
"has_supporting_preceding_context": true,
}
```
In every example, the pronoun of interest and its translation are surrounded by `<p>...</p>` tags. These are guaranteed to be found in the `en_with_tags` and `fr_with_tags` fields, respectively.
Any span surrounded by `<hon>...<hoff>` tags was identified by human annotators as supporting context to correctly translate the pronoun of interest. These spans can be missing altogether (i.e. no contextual information needed), or they can be found in any of the available fields. The `has_supporting_context` field indicates whether the example contains any supporting context.
In the example above, the translation of the pronoun `it` (field `en`) is ambiguous, and the correct translation to the feminine French pronoun `elle` (in field `fr`) is only possible thanks to the supporting feminine noun `plantes` in the field `context_fr`. Since the example contains supporting context, the `has_supporting_context` field is set to `true`. Since the context is not in the same sentence as the target pronoun but in a preceding one, `has_supporting_preceding_context` is also true.
Fields with the `_with_tags` suffix contain tags around pronouns of interest and supporting context, while their counterparts without the suffix contain the same text without tags, to facilitate direct usage with machine translation models.
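If you need to recover the plain sentences yourself, a minimal, assumed regex-based way to strip the `<p>...</p>` and `<hon>...<hoff>` markers looks like the sketch below; the dataset already provides untagged fields, so this is only useful for custom processing of the tagged ones.
```python
import re

TAG_PATTERN = re.compile(r"</?p>|<hon>|<hoff>")

def strip_scat_tags(text: str) -> str:
    """Remove pronoun and supporting-context tags, keeping only the raw text."""
    return re.sub(r"\s+", " ", TAG_PATTERN.sub("", text)).strip()

tagged = "Mais j'ai donné une attention particulière à une pour voir si <p>elle</p> grandit plus."
print(strip_scat_tags(tagged))
```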
### Data Splits
The dataset is split into `train`, `validation` and `test` sets. In the following table, we report the number of examples in the original dataset and in this filtered version in which examples containing malformed tags were removed.
| Split | # Examples (original) | # Examples (**this**) |
| :-----------: | :-------------------: | :-------------------: |
| `train` | 11471 | 6086 |
| `validation` | 145 | 66 |
| `test` | 1000 | 500 |
### Dataset Creation
From the original paper:
>We recruited 20 freelance English-French translators on Upwork. We annotate examples from the contrastive test set by Lopes et al. (2020). This set includes 14K examples from the OpenSubtitles2018 dataset. Through our annotation effort, we obtain 14K examples of supporting context for pronoun anaphora resolution in ambiguous translations selected by professional human translators.
Please refer to the original article [Do Context-Aware Translation Models Pay the Right Attention?](https://aclanthology.org/2021.acl-long.65/) for additional information on dataset creation.
### Additional Preprocessing
Compared to the original SCAT corpus, the following differences are present in this version:
- Examples were filtered using the [filter_scat.py](filter_scat.py) script to retain only examples containing well-formed tags, and to remove superfluous tags. Superfluous tags are defined as nested `<hon><p>...</p><hoff>` tags that represent a lack of contextual information for disambiguating the correct pronoun. In this case, the outer `<hon>...<hoff>` tag was removed. We also filter out examples that do not contain tags for the pronoun of interest in the target sentence for English and French (`en`, `fr`).
- Sentences stripped from tags are provided in fields without the `_with_tags` suffix.
- Tags erroneously embedded within words of the target sentence (e.g. `just s<p>it</p>ting there, wa<p>it</p>ing for his face`) were removed.
- An extra contrastive sentence using the pronoun of interest belonging to the opposite gender is available in the `contrast_fr` field. The swap was performed using a simple lexical heuristic (refer to `swap_pronoun` in [`scat.py`](./scat.py)), and we do not guarantee grammatical correctness of the sentence.
The test set was manually curated to include 250 examples with supporting context in the previous sentences (`has_supporting_preceding_context` is `true`), and 250 examples with supporting context in the same sentence (`has_supporting_preceding_context` is `false`).
## Additional Information
### Dataset Curators
The original authors of SCAT are the curators of the original released dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
The dataset license is unknown.
### Citation Information
Please cite the authors if you use this corpus in your work.
```bibtex
@inproceedings{yin-etal-2021-context,
title = "Do Context-Aware Translation Models Pay the Right Attention?",
author = "Yin, Kayo and
Fernandes, Patrick and
Pruthi, Danish and
Chaudhary, Aditi and
Martins, Andr{\'e} F. T. and
Neubig, Graham",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.65",
doi = "10.18653/v1/2021.acl-long.65",
pages = "788--801",
}
``` |
false | # Dataset Card for Softcatala-Web-Texts-Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.softcatala.org/
- **Repository:** https://github.com/Softcatala/softcatala-web-dataset
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This repository contains Softcatalà website content (articles and program descriptions).
Dataset size:
* articles.json contains 623 articles with 373,233 words.
* programes.json contains 330 program descriptions with 49,868 words.
The license of the data is Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) or Universal Public Domain Dedication (CC0 1.0).
### Supported Tasks and Leaderboards
### Languages
Catalan (`ca`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
JSON (name/value pairs) format with the following fields: `content`, `date`, `id` and `title`.
### Data Splits
One file for the program description texts and one for the article texts.
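A small sketch of reading the article file; the exact top-level structure of the JSON file (a list of records) is an assumption, so adjust if the file is laid out differently.
```python
import json

# articles.json holds the article texts; programes.json holds the program descriptions.
with open("articles.json", encoding="utf-8") as f:
    articles = json.load(f)  # assumed: a list of {content, date, id, title} records

first = articles[0]
print(first["title"], first["date"])
print(first["content"][:200])
```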
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Softcatalà community.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[@softcatala](https://github.com/Softcatala)
[@jordimas](https://github.com/jordimas)
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
[CC0-1.0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
false |
# Dataset Card for ChatMed
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/michaelwzhu/ChatMed-Datasets
- **Repository:** https://github.com/michael-wzhu/ChatMed
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Wei Zhu
### Dataset Summary
ChatMed-Dataset is a dataset of 110,113 medical query-response pairs (in Chinese) generated by OpenAI's `GPT-3.5` engine. The queries are crawled from several online medical consultation sites, reflecting real-world medical needs. The responses are generated by the OpenAI engine. This dataset is designed to inject medical knowledge into Chinese large language models.
The dataset is growing rapidly. Stay tuned for updates!
In the era of large models, models are converging: for example, thanks to the "open-sourcing" of LLaMA, the open-source community has produced a whole series of open models such as stanford-alpaca. However, AGI datasets for Chinese vertical domains remain relatively scarce.
The queries (or prompts) in this dataset, ChatMed-Dataset, come from medical consultation questions posted on the internet (110,113 of them), reflecting the real-world medical consultation needs of different users/patients. Currently, all responses are generated by the OpenAI `GPT-3.5` engine. We will later screen and select the best doctor and patient answers from the internet to build a higher-quality dataset.
### Supported Tasks and Leaderboards
The ChatMed-Dataset is designed for fine-tuning pretrained language models to perform better in automatic medical consultations.
### Languages
The data in ChatMed-Dataset are in Chinese (zh).
## Dataset Structure
### Data Instances
The dataset is in JSON Lines format and is very convenient to use. An example from the ChatMed-Dataset looks as follows:
```json
{
"query": "你好,视神经萎缩,黄斑变性。怎样缓解,及日常注意事项。谢谢。",
"response": "视神经萎缩和黄斑变性是两种不同的疾病,需要分别进行治疗。\n\n视神经萎缩是指视神经逐渐受损,导致视力逐渐减退,常见的治疗方法包括:\n\n1.药物治疗:如促进视神经细胞再生的药物,改善视网膜供血的药物等。\n\n2.手术治疗:如视神经萎缩的手术治疗可以通过神经移植、修复和再生等手术方式来缓解症状。\n\n3.康复训练:通过视力训练等康复训练可以帮助患者恢复视力。\n\n黄斑变性则是指黄斑区域受损,导致中央视力减退,常见的治疗方法包括:\n\n1.药物治疗:如抗氧化剂、维生素等可以缓解症状。\n\n2.手术治疗:如黄斑脱离手术等可以恢复视力。\n\n3.日常注意事项:保持良好的生活习惯,如戒烟、控制血压、血糖等,避免暴晒和过度用眼,保持充足的睡眠和均衡的饮食等。\n\n需要根据具体情况选择合适的治疗方案,建议您咨询专业医生进行诊断和治疗。"
}
```
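Since the file is in JSON Lines format, records can be read line by line; the file path below is a placeholder, so point it at the actual data file shipped with this repository.
```python
import json

# Placeholder path -- replace with the JSON-lines file from this dataset.
path = "chatmed_consult.jsonl"

with open(path, encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(records[0]["query"])
print(records[0]["response"][:100])
```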
### Data Fields
The data fields are as follows:
* `query`: real-life medical query from the web.
* `response`: response generated by the OpenAI `GPT-3.5` engine.
### Data Splits
| | train |
|---------------|------:|
| ChatMed-Dataset | 110,113 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on how large language models respond to real-life medical queries.
> More social impacts will be discussed soon!
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `ChatMed-Dataset` data is generated by a language model (`GPT-3.5`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.
### Citation Information
```
@misc{alpaca,
  author = {Wei Zhu},
  title = {ChatMed-Dataset: A GPT-generated medical query-response dataset for medical large language models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/michael-wzhu/ChatMed}},
}
```
### Contributions
[More Information Needed] |
true |
# Dataset Card for Swiss Legislation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Legislation is a multilingual, diachronic dataset of 36K Swiss laws. This dataset is part of a challenging Information Retrieval task.
### Supported Tasks and Leaderboards
### Languages
The total number of texts in the dataset is 35,698. The dataset is saved in _lexfind_v2.jsonl_ format.
Switzerland has four official languages: German, French, Italian and Romansh, with some additional English laws also represented. Laws are written by legal experts.
| Language | Subset | Number of Documents |
|------------|------------|----------------------|
| German | **de** | 18K |
| French | **fr** | 11K |
| Italian | **it** | 6K |
| Romansh    | **rm**     | 534                  |
| English | **en** | 207 |
## Dataset Structure
### Data Fields
Each entry in the dataset is a dictionary with the following keys:
- `canton`: the canton of origin of the legislation
- example: "ag"
- `language`: the language of the legislation
- example: "de"
- `uuid`: a unique identifier for the legislation
- example: "ec312f57-05fe-4552-ba50-8c9c269e0f3b"
- `title`: the title of the legislation
- example: "Gesetz über die Geoinformation im Kanton Aargau"
- `short`: a short description of the legislation
- example: "Kantonales Geoinformationsgesetz"
- `abbreviation`: an abbreviation for the legislation
- example: "KGeoIG"
- `sr_number`: a reference number for the legislation
- example: "740.100"
- `is_active`: whether the legislation is currently in force
- example: true
- `version_active_since`: the date since when the legislation's current version is active
- example: "2021-09-01"
- `family_active_since`: the date since when the legislation's current version's family is active
- example: "2011-05-24"
- `version_inactive_since`: the date since when the legislation's current version is inactive
- example: null
- `version_found_at`: the date the legislation's current version was found
- example: "2021-09-01"
- `pdf_url`: a link to the legislation's pdf
- example: "https://www.lexfind.ch/tol/1557/de"
- `html_url`: a link to the legislation's html
  - example: "https://gesetzessammlungen.ag.ch/app/de/texts_of_law/740.100"
- `pdf_content`: the legislation's pdf content
- example: "740.100 - Gesetz über..."
- `html_content`: the legislation's html content
- example: ""
- `changes`: a list of changes made to the legislation
- example: []
- `history`: a list of the legislation's history
- example: []
- `quotes`: a list of quotes from the legislation
- example: []
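A hedged sketch of loading the _lexfind_v2.jsonl_ file mentioned above and filtering it by language; the local file path is an assumption about where you stored the data.
```python
from datasets import load_dataset

# Load the raw JSON-lines export and keep only the German subset.
laws = load_dataset("json", data_files="lexfind_v2.jsonl", split="train")
laws_de = laws.filter(lambda row: row["language"] == "de")

print(len(laws_de))
print(laws_de[0]["title"], laws_de[0]["canton"])
```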
### Data Instances
[More Information Needed]
### Data Splits
1. 'ch': Switzerland (Federal) - 15840
2. 'fr': Fribourg - 1633
3. 'be': Bern - 1344
4. 'vs': Valais - 1328
5. 'gr': Graubünden - 1205
6. 'ne': Neuchâtel - 1115
7. 'zh': Zurich - 974
8. 'bs': Basel-Stadt - 899
9. 'bl': Basel-Landschaft - 863
10. 'vd': Vaud - 870
11. 'ge': Geneva - 837
12. 'sg': St. Gallen - 764
13. 'ju': Jura - 804
14. 'zg': Zug - 632
15. 'ti': Ticino - 627
16. 'lu': Lucerne - 584
17. 'so': Solothurn - 547
18. 'ow': Obwalden - 513
19. 'ik': Interkantonal - 510
20. 'sh': Schaffhausen - 469
21. 'gl': Glarus - 467
22. 'tg': Thurgau - 453
23. 'sz': Schwyz - 423
24. 'ai': Appenzell Innerrhoden - 416
25. 'ag': Aargau - 483
26. 'ar': Appenzell Ausserrhoden - 330
27. 'nw': Nidwalden - 401
28. 'ur': Uri - 367
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
*Visu, Ronja, Joel*
*Title: Blabliblablu*
*Name of conference*
```
cit
``` |
false | # Dataset Card for "medqa"
MedQA GitHub: [https://github.com/jind11/MedQA](https://github.com/jind11/MedQA)
## Citation information
```
@article{jin2020disease,
title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={arXiv preprint arXiv:2009.13081},
year={2020}
}
``` |
false |
# LLaVA Visual Instruct 150K Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct 150K is a set of GPT-generated multimodal instruction-following data.
It is constructed for visual instruction tuning and for building large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct 150K was collected in April 2023, by prompting GPT-4-0314 API.
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Attribution-NonCommercial 4.0 International
It should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
false |
## Description
This dataset contains triples of the form "query1", "query2", "label", where the labels are mapped as follows:
- similar: 1
- not similar: 0
- ambiguous: -1 |
false | # Dataset Card Nota Lyd- og tekstdata
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Disclaimer](#disclaimer)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://sprogteknologi.dk/dataset/notalyd-ogtekstdata
- **Data Storage Url:** https://sprogtek-ressources.digst.govcloud.dk/nota/
- **Point of Contact:** info@sprogteknologi.dk
### Dataset Summary
This data was created by the public institution Nota (https://nota.dk/), which is part of the Danish Ministry of Culture. Nota has a library of audiobooks and audio magazines for people with reading or sight disabilities. Nota also produces a number of audiobooks and audio magazines itself.
The dataset consists of .wav and .txt files from Nota's audiomagazines "Inspiration" and "Radio/TV".
The dataset has been published as a part of the initiative sprogteknologi.dk, within the Danish Agency for Digital Government (www.digst.dk).
336 GB of data is available, containing voice recordings and accompanying transcripts.
Each publication has been segmented into 2-50 second .wav files, each with an accompanying transcription.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Danish
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called `path`, and its sentence.
```
{'path': '<path_to_clip>.wav', 'sentence': 'Dette er et eksempel', 'audio': {'path': '<path_to_clip>.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 44100}}
```
### Data Fields
- `path`: The path to the audio file.
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the sketch below).
- `sentence`: The sentence that was read by the speaker.
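A short sketch illustrating the recommended access order from the note above; the repository id is a placeholder, so substitute the actual id of this dataset.
```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual id.
ds = load_dataset("<nota-dataset-id>", split="train")

sample = ds[0]           # index first, then access "audio"
audio = sample["audio"]  # decoding/resampling happens only for this one clip
print(sample["sentence"], audio["sampling_rate"], audio["array"].shape)

# Avoid ds["audio"][0]: it would decode every audio file in the split first.
```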
### Data Splits
The material has for now only a train split. As this is very early stage of the dataset, splits might be introduced at a later stage.
## Dataset Creation
### Disclaimer
There might be smaller discrepancies between the .wav and .txt files. Therefore, there might be issues in the alignment of timestamps, text and sound files.
There are no strict rules as to how readers read aloud non-letter characters (i.e. numbers, €, $, !, ?). These symbols can be read differently throughout the dataset.
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset is made public and free to use. Recorded individuals have, by written contract, accepted and agreed to the publication of their recordings.
Other names appearing in the dataset belong to already publicly known individuals (i.e. TV or radio host names). Their names are not to be treated as sensitive or personal data in the context of this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://sprogteknologi.dk/
Contact info@sprogteknologi.dk if you have questions regarding use of data.
They gladly receive inputs and ideas on how to distribute the data.
### Licensing Information
[CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/)
|
true |
# Dataset Card for pile-pii-scrubadub
## Dataset Description
- **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives**
- **Paper: Arxiv link to be added**
### Dataset Summary
This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the toxicity of each sentence.
Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text.
## Dataset Structure
### Data Instances
The dataset contains 1,949,977 instances.
### Data Fields
- texts (sequence): a list of the sentences in the document, segmented using SpaCy
- meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated
- scores (sequence): a score for each sentence in the `texts` column indicating the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify)
- avg_score (float64): the average of the scores listed in the `scores` column
- num_sents (int64): the number of sentences (and scores) in that document
### Data Splits
Training set only
## Dataset Creation
### Curation Rationale
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text.
### Source Data
#### Initial Data Collection and Normalization
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile).
#### Who are the source language producers?
Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset.
### Annotations
#### Annotation process
Each sentence was scored using [Detoxify](https://github.com/unitaryai/detoxify), which is a toxic comment classifier.
We used the `unbiased` model which is based on the 124M parameter [RoBERTa](https://arxiv.org/abs/1907.11692) and trained on the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).
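A minimal sketch of scoring sentences with the same tool; this only reproduces the general idea with the Detoxify library's public API, not the exact scoring script used for this dataset.
```python
from detoxify import Detoxify

# The "unbiased" checkpoint is the one described above.
model = Detoxify("unbiased")

sentences = ["You are a wonderful person.", "You are an idiot."]
scores = model.predict(sentences)["toxicity"]

for sentence, score in zip(sentences, scores):
    print(f"{score:.3f}  {sentence}")
```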
#### Who are the annotators?
[Detoxify](https://github.com/unitaryai/detoxify)
### Personal and Sensitive Information
This dataset contains all personally identifiable information and toxic text that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile).
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contains examples of toxic text and personal identifiable information.
(A version of this dataset with personally identifiable information annotated is [available here](https://huggingface.co/datasets/tomekkorbak/pile-pii-scrubadub).)
Please take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information.
This dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained using it will avoid generating toxic text.
We do not recommend deploying models trained on this data.
### Discussion of Biases
This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027
### Other Known Limitations
The toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.
## Additional Information
### Dataset Curators
[The Pile](https://huggingface.co/datasets/the_pile)
### Licensing Information
From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
Paper information to be added
### Contributions
[The Pile](https://huggingface.co/datasets/the_pile) |
true |
# ParaDetox: Detoxification with Parallel Data (English). Toxicity Task Results
This repository contains information about **Toxicity Task** markup from [English Paradetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline.
The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at ACL 2022 main conference.
## ParaDetox Collection Pipeline
The ParaDetox Dataset collection was done via [Yandex.Toloka](https://toloka.yandex.com/) crowdsource platform. The collection was done in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
Specifically, this repo contains the results of **Task 3: Toxicity Check**. Only samples with a markup confidence >= 90 are included.
The input here is text, and the label shows whether the text is toxic or not.
In total, the dataset contains 26,507 samples. Among them, the minor part (4,009) are toxic examples.
## Citation
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
## Contacts
For any questions, please contact: Daryna Dementieva (dardem96@gmail.com) |
false | # Dataset Card for "cifar10-outlier"
📚 This dataset is an enriched version of the [CIFAR-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
*This dataset is used in an article currently under review; a link will be provided as soon as possible.*
## Explore the Dataset
The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) allows you to explore this dataset:

You can find a Hugging Face Space running Spotlight with this dataset here: <https://huggingface.co/spaces/renumics/cifar10-outlier>
Or you can explore it locally:
```python
!pip install renumics-spotlight datasets
from renumics import spotlight
import datasets
ds = datasets.load_dataset("renumics/cifar10-outlier", split="train")
df = ds.rename_columns({"img": "image", "label": "labels"}).to_pandas()
df["label_str"] = df["labels"].apply(lambda x: ds.features["label"].int2str(x))
dtypes = {
"nn_image": spotlight.Image,
"image": spotlight.Image,
"embedding_ft": spotlight.Embedding,
"embedding_foundation": spotlight.Embedding,
}
spotlight.show(
df,
dtype=dtypes,
layout="https://spotlight.renumics.com/resources/layout_pre_post_ft.json",
)
```
|
false |
This is a test dataset |
true |
# Dataset Card for [products-2017]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [LSPCv2 Homepage](http://webdatacommons.org/largescaleproductcorpus/v2/index.html)
- **Point of Contact:** [Ralph Peeters](mailto:ralph.peeters@uni-mannheim.de)
### Dataset Summary
Many e-shops have started to mark up product data within their HTML pages using the schema.org vocabulary. The Web Data Commons project regularly extracts such data from the Common Crawl, a large public web crawl. The Web Data Commons Training and Test Sets for Large-Scale Product Matching contain product offers from different e-shops in the form of binary product pairs (with corresponding label "match" or "no match").
In order to support the evaluation of machine learning-based matching methods, the data is split into training, validation and test set. We provide training and validation sets in four different sizes for four product categories. The labels of the test sets were manually checked while those of the training sets were derived using shared product identifiers from the Web via weak supervision.
The data stems from the WDC Product Data Corpus for Large-Scale Product Matching - Version 2.0 which consists of 26 million product offers originating from 79 thousand websites.
### Supported Tasks and Leaderboards
Entity Matching, Product Matching
### Languages
English
## Dataset Structure
### Data Instances
The data is structured as pairs of product offers with the corresponding match/non-match label. This is an example instance from the computers category:
```
{"pair_id":"581109#16637861","label":0,"id_left":581109,"category_left":"Computers_and_Accessories","cluster_id_left":1324529,"brand_left":"\"Gigabyte\"@en","title_left":" \"Gigabyte Radeon RX 480 G1 Gaming 4096MB GDDR5 PCI-Express Graphics Card\"@en \"Gigabyte Gr| OcUK\"@en","description_left":"\"GV-RX480G1 GAMING-4GD, Core Clock: 1202MHz, Boost Clock: 1290MHz, Memory: 4096MB 7000MHz GDDR5, Stream Processors: 2304, Crossfire Ready, VR Ready, FreeSync Ready, 3 Years Warranty\"@en ","price_left":null,"specTableContent_left":null,"id_right":16637861,"category_right":"Computers_and_Accessories","cluster_id_right":107415,"brand_right":"\"Gigabyte\"@en","title_right":" \"Gigabyte Radeon RX 550 Gaming OC 2048MB GDDR5 PCI-Express Graphics Card\"@en \"Gigabyte Gr| OcUK\"@en","description_right":"\"GV-RX550GAMING OC-2GD, Boost: 1219MHz, Memory: 2048MB 7000MHz GDDR5, Stream Processors: 512, DirectX 12 Support, 3 Years Warranty\"@en ","price_right":null,"specTableContent_right":null}
```
### Data Fields
- pair_id: unique identifier of a pair (string)
- label: binary label, match or non-match (int)
The following attributes are contained twice, once for the first and once for the second product offer
- id: unique id of the product offer (int)
- category: product category (string)
- cluster_id: id of the product cluster from the original corpus this offer belongs to (int)
- brand: brand of the product (string)
- title: product title (string)
- description: longer product description (string)
- price: price of the product offer (string)
- specTableContent: additional data found in specification tables on the webpage that contains the product offer (string)
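The pairwise JSON-lines format above can be inspected directly with the Python standard library. The snippet below is a minimal sketch (not part of the original distribution); the record is abridged from the example under "Data Instances".

```python
import json

# One (abridged) record in the pairwise JSON-lines format shown under "Data Instances".
line = (
    '{"pair_id": "581109#16637861", "label": 0, '
    '"brand_left": "Gigabyte", "title_left": "Gigabyte Radeon RX 480 G1 Gaming", '
    '"brand_right": "Gigabyte", "title_right": "Gigabyte Radeon RX 550 Gaming OC"}'
)

pair = json.loads(line)
print("pair_id:", pair["pair_id"])
print("left offer: ", pair["title_left"])
print("right offer:", pair["title_right"])
print("label:", "match" if pair["label"] == 1 else "no match")
```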
### Data Splits
- Computers
- Test set - 1100 pairs
- Small Train set - 2267 pairs
- Small Validation set - 567 pairs
- Medium Train set - 6475 pairs
- Medium Validation set - 1619 pairs
- Large Train set - 26687 pairs
- Large Validation set - 6672 pairs
- XLarge Train set - 54768 pairs
- XLarge Validation set - 13693 pairs
- Cameras
- Test set - 1100 pairs
- Small Train set - 1508 pairs
- Small Validation set - 378 pairs
- Medium Train set - 4204 pairs
- Medium Validation set - 1051 pairs
- Large Train set - 16028 pairs
- Large Validation set - 4008 pairs
- XLarge Train set - 33821 pairs
- XLarge Validation set - 8456 pairs
- Watches
- Test set - 1100 pairs
- Small Train set - 1804 pairs
- Small Validation set - 451 pairs
- Medium Train set - 5130 pairs
- Medium Validation set - 1283 pairs
- Large Train set - 21621 pairs
- Large Validation set - 5406 pairs
- XLarge Train set - 49255 pairs
- XLarge Validation set - 12314 pairs
- Shoes
- Test set - 1100 pairs
- Small Train set - 1650 pairs
- Small Validation set - 413 pairs
- Medium Train set - 4644 pairs
- Medium Validation set - 1161 pairs
- Large Train set - 18391 pairs
- Large Validation set - 4598 pairs
- XLarge Train set - 33943 pairs
- XLarge Validation set - 8486 pairs
## Dataset Creation
### Annotations
#### Annotation process
- Training and Validation sets: distant supervision via shared schema.org product IDs
- Test sets: Single expert annotator
#### Who are the annotators?
[Ralph Peeters](https://www.uni-mannheim.de/dws/people/researchers/phd-students/ralph-peeters/)
## Additional Information
### Citation Information
```
@inproceedings{primpeli2019wdc,
title={The WDC training dataset and gold standard for large-scale product matching},
author={Primpeli, Anna and Peeters, Ralph and Bizer, Christian},
booktitle={Companion Proceedings of The 2019 World Wide Web Conference},
pages={381--386},
year={2019}
}
```
|
true |
# Dataset Card for SemEval Task 12: Aspect-based Sentiment Analysis
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is originally from [SemEval-2015 Task 12](https://alt.qcri.org/semeval2015/task12/).
From the page:
> SE-ABSA15 will focus on the same domains as SE-ABSA14 (restaurants and laptops). However, unlike SE-ABSA14, the input datasets of SE-ABSA15 will contain entire reviews, not isolated (potentially out of context) sentences. SE-ABSA15 consolidates the four subtasks of SE-ABSA14 within a unified framework. In addition, SE-ABSA15 will include an out-of-domain ABSA subtask, involving test data from a domain unknown to the participants, other than the domains that will be considered during training. In particular, SE-ABSA15 consists of the following two subtasks.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. |
true | # Dataset Card for HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese
## Dataset Description
- **Homepage:** http://143.107.183.175:14581/
- **Repository:** https://github.com/franciellevargas/HateBR
- **Paper:** https://aclanthology.org/2022.lrec-1.777/
- **Leaderboard:**
- **Point of Contact:** https://franciellevargas.github.io/
### Dataset Summary
HateBR is the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented, reaching an F1-score of 85% and outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expertly annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area.
**Relevant Links:**
* [**Demo: Brasil Sem Ódio**](http://143.107.183.175:14581/)
* [**MOL - Multilingual Offensive Lexicon Annotated with Contextual Information**](https://github.com/franciellevargas/MOL)
### Supported Tasks and Leaderboards
Hate Speech Detection
### Languages
Portuguese
## Dataset Structure
### Data Instances
```
{'instagram_comments': 'Hipocrita!!',
'offensive_language': True,
'offensiveness_levels': 2,
'antisemitism': False,
'apology_for_the_dictatorship': False,
'fatphobia': False,
'homophobia': False,
'partyism': False,
'racism': False,
'religious_intolerance': False,
'sexism': False,
'xenophobia': False,
'offensive_&_non-hate_speech': True,
'non-offensive': False,
'specialist_1_hate_speech': False,
'specialist_2_hate_speech': False,
'specialist_3_hate_speech': False
}
```
### Data Fields
* **instagram_comments**: Instagram comments.
* **offensive_language**: A classification of comments as either offensive (True) or non-offensive (False).
* **offensiveness_levels**: A classification of comments based on their level of offensiveness, including highly offensive (3), moderately offensive (2), slightly offensive (1) and non-offensive (0).
* **antisemitism**: A classification of whether or not the comment contains antisemitic language.
* **apology_for_the_dictatorship**: A classification of whether or not the comment praises the military dictatorship period in Brazil.
* **fatphobia**: A classification of whether or not the comment contains language that promotes fatphobia.
* **homophobia**: A classification of whether or not the comment contains language that promotes homophobia.
* **partyism**: A classification of whether or not the comment contains language that promotes partyism.
* **racism**: A classification of whether or not the comment contains racist language.
* **religious_intolerance**: A classification of whether or not the comment contains language that promotes religious intolerance.
* **sexism**: A classification of whether or not the comment contains sexist language.
* **xenophobia**: A classification of whether or not the comment contains language that promotes xenophobia.
* **offensive_&_no-hate_speech**: A classification of whether or not the comment is offensive but does not contain hate speech.
* **specialist_1_hate_speech**: A classification of whether or not the comment was annotated by the first specialist as hate speech.
* **specialist_2_hate_speech**: A classification of whether or not the comment was annotated by the second specialist as hate speech.
* **specialist_3_hate_speech**: A classification of whether or not the comment was annotated by the third specialist as hate speech.
### Data Splits
The original authors of the dataset did not propose a standard data split. To address this, we use the [multi-label data stratification technique](http://scikit.ml/stratification.html) implemented in the scikit-multilearn library to propose a train-validation-test split. This method considers all classes for hate speech in the data and attempts to balance the representation of each class in the split; a minimal sketch of the procedure is shown after the table below.
| name |train|validation|test|
|---------|----:|----:|----:|
|hatebr|4480|1120|1400|
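The sketch below shows how such a split can be produced with scikit-multilearn's iterative stratification. It is an illustrative assumption, not the exact script used for the published split: the label matrix here is random placeholder data, and the 80/20-then-80/20 proportions simply reproduce the 64/16/20 ratio of the table above.

```python
# A minimal sketch, assuming the annotations are available as a binary label matrix.
import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

# X: one row per comment (here just an index column);
# Y: multi-label matrix over the hate-speech classes and offensiveness flags.
num_examples, num_labels = 7000, 11
X = np.arange(num_examples).reshape(-1, 1)
Y = np.random.randint(0, 2, size=(num_examples, num_labels))  # placeholder labels

# First carve out the test set, then split the remainder into train/validation.
X_rest, Y_rest, X_test, Y_test = iterative_train_test_split(X, Y, test_size=0.2)
X_train, Y_train, X_val, Y_val = iterative_train_test_split(X_rest, Y_rest, test_size=0.2)

print(len(X_train), len(X_val), len(X_test))  # roughly 4480 / 1120 / 1400
```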
## Considerations for Using the Data
### Discussion of Biases
Please refer to [the HateBR paper](https://aclanthology.org/2022.lrec-1.777/) for a discussion of biases.
### Licensing Information
The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of [SINCH](https://www.sinch.com/).
### Citation Information
```
@inproceedings{vargas2022hatebr,
title={HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection},
author={Vargas, Francielle and Carvalho, Isabelle and de G{\'o}es, Fabiana Rodrigues and Pardo, Thiago and Benevenuto, Fabr{\'\i}cio},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={7174--7183},
year={2022}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [Enhancement to Low Resource Text Classification via Sequential Transfer Learning](#)
- **Leaderboard:**
- **Point of Contact:** [Neil Riego](mailto:neilchristianriego3@gmail.com)
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A typical data point comprises a text and the corresponding label.
An example from the test set looks as follows:
```
{
'label': pos,
'text': 'Huyyy ang gandaaaaaaaaaaa. Grabe sobrang ganda talaga wala ako masabi. Complete orders pa pinadala sa akin. Buti hindi nabasag kahit walang bubble wrap. Okay na lang din para save mother earth and at least hindi nabasag hehe. Oorder ulit ako ang ganda eh'
}
```
### Data Fields
- 'text': The review texts are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes ("").
- 'label': Corresponds to the sentiment associated with the review (either positive or negative).
### Data Splits
The Shopee reviews tl binary dataset is constructed by randomly taking 14,000 training samples and 3,000 samples each for validation and testing per class (neg and pos).
In total there are 28,000 training samples and 6,000 samples each in the validation and test splits.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Dataset Card for LUDWIG
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository: https://github.com/ucl-dark/ludwig**
- **Paper: TODO**
- **Leaderboard: TODO**
- **Point of Contact: Laura Ruis**
### Dataset Summary
LUDWIG (**L**anguage **U**nderstanding **W**ith **I**mplied meanin**G**) is a dataset containing English conversational implicatures.
Implicature is the act of meaning or implying one thing by saying something else.
There are different types of implicatures, from simple ones like "Some guests came to the party"
(implying not all guests came) to more complicated implicatures that depend on context like
"A: Are you going to the party this Friday? B: There's a global pandemic.", implying no. Implicatures serve a wide range of
goals in communication: efficiency, style, navigating social interactions, and more. We cannot fully
understand utterances without understanding their implications.
The implicatures in this dataset are conversational because they come in utterance-response tuples.
Each tuple has an implicature associated with it,
which is the implied meaning of the response. For example:
Utterance: Are you going to the party this Friday?
Response: There's a global pandemic.
Implicature: No.
This dataset can be used to evaluate language models on their pragmatic language understanding.
### Supported Tasks and Leaderboards
- ```text-generation```: The dataset can be used to evaluate a model's ability to generate the correct next token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means" the correct completion would be "no". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes") (a minimal scoring sketch for this setting is shown after this list).
- ```fill-mask```: The dataset can be used to evaluate a model's ability to fill the correct token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]" the correct mask-fill would be "no". Success in this task can be determined by the ability to fill the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes").
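The sketch below illustrates the likelihood-comparison evaluation for the ```text-generation``` setting. It is a minimal example assuming a generic causal LM from the `transformers` library (GPT-2 is only a placeholder model), not the evaluation code of the original paper.

```python
# Minimal sketch: compare p(" no") vs p(" yes") as completions of a prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of log-probabilities the model assigns to `completion` following `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    total = 0.0
    # Score only the completion tokens; the token at position p is predicted from position p - 1.
    for p in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, p - 1, full_ids[0, p]].item()
    return total

# The prompt ends without a trailing space so the completions (" no" / " yes",
# with a leading space) tokenize on a clean boundary.
prompt = ("Esther asked 'Are you going to the party this Friday?' and Juan "
          "responded 'There's a global pandemic.', which means")
print("correct:", completion_logprob(prompt, " no") > completion_logprob(prompt, " yes"))
```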
### Languages
English
## Dataset Structure
### Data Instances
Find below an example of a 1-shot instance (1-shot because there is 1 prompt example).
```
{
"id": 1,
"utterance": "Are you going to the party this Friday?",
"response": "There's a global pandemic.",
"implicature": "No.",
"incoherent_implicature": "Yes".
"prompts": [
{
"utterance": "Was that hot?",
"response": "The sun was scorching.",
"implicature": "Yes.",
"incoherent_implicature": "No.".
}
]
}
```
### Data Fields
```
{
"id": int, # unique identifier of data points
"utterance": str, # the utterance in this example
"response": str, # the response in this example
"implicature": str, # the implied meaning of the response, e.g. 'yes'
"incoherent_implicature": str, # the wrong implied meaning, e.g. 'no'
"prompts": [ # optional: prompt examples from the validation set
{
"utterance": str,
"response": str,
"implicature": str,
"incoherent_implicature": str,
}
]
}
```
### Data Splits
**Validation**: 118 instances that can be used for finetuning or few-shot learning
**Test**: 600 instances that can be used for evaluating models.
NB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added
by @LauraRuis.
## Dataset Creation
### Curation Rationale
Pragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field.
We want computational models of language to understand all of a speaker's implications.
### Source Data
#### Initial Data Collection and Normalization
"Conversational implicatures in English dialogue: Annotated dataset", Elizabeth Jasmi George and Radhika Mamidi 2020.
[Link to paper](https://doi.org/10.1016/j.procs.2020.04.251)
#### Who are the source language producers?
These written representations of the utterances are collected manually by scraping and transcribing from relevant sources from August, 2019 to August, 2020. The source of dialogues in the data include TOEFL listening comprehension short conversations, movie dialogues from IMSDb and websites explaining idioms, similes, metaphors and hyperboles. The implicatures are annotated manually.
### Annotations
#### Annotation process
Manually annotated by dataset collectors.
#### Who are the annotators?
Authors of the original paper.
### Personal and Sensitive Information
All the data is public and not sensitive.
## Considerations for Using the Data
### Social Impact of Dataset
Any application that requires communicating with humans requires pragmatic language understanding.
### Discussion of Biases
Implicatures can be specific to particular cultures. For example, whether the Pope is Catholic (a commonly used response implicature to indicate "yes") might not be common knowledge for everyone.
Implicatures are also language-specific, the way people use pragmatic language depends on the language. This dataset only focuses on the English language.
### Other Known Limitations
None yet.
## Additional Information
### Dataset Curators
Elizabeth Jasmi George and Radhika Mamidi
### Licensing Information
[license](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{George:Mamidi:2020,
author = {George, Elizabeth Jasmi and Mamidi, Radhika},
doi = {10.1016/j.procs.2020.04.251},
journal = {Procedia Computer Science},
keywords = {},
note = {https://doi.org/10.1016/j.procs.2020.04.251},
number = {},
pages = {2316-2323},
title = {Conversational implicatures in English dialogue: Annotated dataset},
url = {https://app.dimensions.ai/details/publication/pub.1128198497},
volume = {171},
year = {2020}
}
```
### Contributions
Thanks to [@LauraRuis](https://github.com/LauraRuis) for adding this dataset. |
false |
<div align="center">
<img width="640" alt="keremberke/indoor-scene-classification" src="https://huggingface.co/datasets/keremberke/indoor-scene-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['meeting_room', 'cloister', 'stairscase', 'restaurant', 'hairsalon', 'children_room', 'dining_room', 'lobby', 'museum', 'laundromat', 'computerroom', 'grocerystore', 'hospitalroom', 'buffet', 'office', 'warehouse', 'garage', 'bookstore', 'florist', 'locker_room', 'inside_bus', 'subway', 'fastfood_restaurant', 'auditorium', 'studiomusic', 'airport_inside', 'pantry', 'restaurant_kitchen', 'casino', 'movietheater', 'kitchen', 'waitingroom', 'artstudio', 'toystore', 'kindergarden', 'trainstation', 'bedroom', 'mall', 'corridor', 'bar', 'classroom', 'shoeshop', 'dentaloffice', 'videostore', 'laboratorywet', 'tv_studio', 'church_inside', 'operating_room', 'jewelleryshop', 'bathroom', 'clothingstore', 'closet', 'winecellar', 'livingroom', 'nursery', 'gameroom', 'inside_subway', 'deli', 'bakery', 'library', 'prisoncell', 'gym', 'concert_hall', 'greenhouse', 'elevator', 'poolinside', 'bowling']
```
### Number of Images
```json
{'train': 10885, 'test': 1558, 'valid': 3128}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/indoor-scene-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5](https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5?ref=roboflow2huggingface)
### Citation
```
```
### License
MIT
### Dataset Summary
This dataset was exported via roboflow.com on October 24, 2022 at 4:09 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 15571 images.
Indoor-scenes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
|
false |
# NB Alpaca Norwegian Bokmål
This dataset is a translation to Norwegian Bokmål of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json), a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca).
An [earlier version](https://huggingface.co/datasets/bertin-project/alpaca-spanish/blob/main/nllb/spa_train.json.gz) used [Facebook's NLLB 1.3B model](https://huggingface.co/facebook/nllb-200-1.3B), but the current version uses OpenAI's `gpt-3.5-turbo`, hence this dataset cannot be used to create models that compete in any way against OpenAI. |
false |
ELI5 paired
This is a processed version of the [eli5](https://huggingface.co/datasets/eli5) dataset. The dataset was created by closely following the steps in the [stack-exchange-paired dataset](https://huggingface.co/datasets/lvwerra/stack-exchange-paired). The following steps were applied:
- Create pairs (response_j, response_k) where j was rated better than k
- Sample at most 10 pairs per question
- Shuffle the dataset globally
This dataset is designed to be used for preference learning using techniques such as Reinforcement Learning from Human Feedback. The processing notebook is in the repository as well.
If you want to construct a "question" column in this data, you can either use just the "title" column, or concatenate the "title" column with the "selftext" column as follows:
```
from datasets import load_dataset

def get_question(example):
    """Build a "question" column by joining the title and (optional) selftext."""
    title = example["title"]
    selftext = example["selftext"]
    if selftext:
        # Choose the separator based on whether the text already ends with punctuation.
        if selftext[-1] not in [".", "?", "!"]:
            separator = ". "
        else:
            separator = " "
        question = title + separator + selftext
    else:
        question = title
    example["question"] = question
    return example

dataset = load_dataset("vincentmin/eli5_askscience_askhistorians_rlhf")
dataset = dataset.map(get_question)
```
For the license, see the [eli5 dataset](https://huggingface.co/datasets/eli5) which states
"The licensing status of the dataset hinges on the legal status of the Pushshift.io data which is unclear."
at the time of creation of this dataset. |
false | # Dataset Card for "extended-trading-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The extended-trading-sp500 dataset contains detailed information on the extended trading of the S&P 500 index.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string representing the date of the trading.
- time (string): A string representing the time of the trading.
- price (float64): A floating-point number representing the price of the stock at the given date and time.
- share_volume (string): A string representing the volume of shares traded during this time.
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The extended-trading-sp500 dataset was developed to support research into after-hours trading patterns and behaviors.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from various trading platforms and aggregated for this dataset.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The extended-trading-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The extended-trading-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, extended-trading-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
false |
# Dataset Card for synQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Internal-Datasets homepage](https://github.com/Marbyun/datasets-huggingface)
- **Point of Contact:** [Marbyun](https://huggingface.co/Marbyun)
### Dataset Summary
This dataset is intended for AI question answering. It is inspired by SynQA and the SQuAD v1.1 (https://arxiv.org/abs/1606.05250) training set.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "None",
"paragraphs": [
{
"context": "Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.",
"qas": [
{
"id": "689f275aacba6c43ff112b2c7cb16129bfa934fa",
"question": "What material is the statue of Christ made of?",
"answers": [
{
"answer_start": 190,
"text": "organic copper"
}
]
},
{
"id": "73bd3f52f5934e02332787898f6e568d04bc5403",
"question": "Who is on the Main Building's gold dome?",
"answers": [
{
"answer_start": 111,
"text": "the Virgin Mary."
}
]
},
{
"id": "4d459d5b75fd8a6623446290c542f99f1538cf84",
"question": "What kind of statue is at the end of the main drive?",
"answers": [
{
"answer_start": 667,
"text": "modern stone"
}
]
},
{
"id": "987a1e469c5b360f142b0a171e15cef17cd68ea6",
"question": "What type of dome is on the Main Building at Notre Dame?",
"answers": [
{
"answer_start": 79,
"text": "gold"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: all "None" in this dataset
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text.
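Since the data follows the SQuAD v1.1 layout, it can be walked with the Python standard library alone. The snippet below is an illustrative sketch (the file name `train.json` is a placeholder, not a file name stated on this card) that prints each question together with its answer and the span that `answer_start` points to in the context.

```python
import json

# "train.json" is a placeholder name for a SQuAD-v1.1-formatted file from this dataset.
with open("train.json", encoding="utf-8") as f:
    squad = json.load(f)

for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]
        for qa in paragraph["qas"]:
            for answer in qa["answers"]:
                start = answer["answer_start"]  # character offset into `context`
                span = context[start:start + len(answer["text"])]
                print(f"Q: {qa['question']}\nA: {answer['text']} (context span at {start}: {span!r})\n")
```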
### Data Splits
The dataset is composed of a single split of 314,811 examples that we used in a two-stage fine-tuning process (refer to the paper for further details).
## Dataset Creation
### Curation Rationale
This dataset was created to investigate the effects of using synthetic adversarial data generation to improve robustness of state-of-the-art QA models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and a BART-Large generative model for the questions.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset should be seen as a supporting resource for improving the ability of systems to handle questions that contemporary state-of-the-art models struggle to answer correctly, which often require more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that the provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, selected candidate answers, generated questions, quality re-labelling process, as well as any algorithmic biases that may be exacerbated from the adversarial annotation process used to collect the SQuAD and AdversarialQA data on which the generators were trained.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was prepared by the RnD Team.
### Licensing Information
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```
@inproceedings{Rnd-AI-Team,
title = "Dataset for Develop AI.",
author = "RnD Team,",
booktitle = "",
month = jun,
year = "2023",
address = "",
publisher = "",
url = "",
doi = "",
pages = "",
abstract = "This Dataset prepare by RnD Team for develop AI Question and Answering Chatbot.",
}
``` |
false | |
false |
<div align="center">
<img width="640" alt="manot/football-players" src="https://huggingface.co/datasets/manot/football-players/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['football', 'player']
```
### Number of Images
```json
{'valid': 87, 'train': 119}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("manot/football-players", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z/dataset/1](https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ football-players-2l81z_dataset,
title = { football-players Dataset },
type = { Open Source Dataset },
author = { Konstantin Sargsyan },
howpublished = { \\url{ https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z } },
url = { https://universe.roboflow.com/konstantin-sargsyan-wucpb/football-players-2l81z },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jun },
note = { visited on 2023-06-12 },
}
```
### License
MIT
### Dataset Summary
This dataset was exported via roboflow.com on June 12, 2023 at 10:10 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 206 images.
Players are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
true | # Dataset Card for the American Stories dataset
## Dataset Description
- **Homepage:** Coming Soon
- **Repository:** https://github.com/dell-research-harvard/AmericanStories
- **Paper:** Coming Soon
- **Point of Contact:** melissa.dell@gmail.com
### Dataset Summary
The American Stories dataset is a collection of full article texts extracted from historical U.S. newspaper images. It includes nearly 20 million scans from the public domain Chronicling America collection maintained by the Library of Congress. The dataset is designed to address the challenges posed by complex layouts and low OCR quality in existing newspaper datasets.
It was created using a novel deep learning pipeline that incorporates layout detection, legibility classification, custom OCR, and the association of article texts spanning multiple bounding boxes. It employs efficient architectures specifically designed for mobile phones to ensure high scalability.
The dataset offers high-quality data that can be utilized for various purposes. It can be used to pre-train large language models and improve their understanding of historical English and world knowledge.
The dataset can also be integrated into retrieval-augmented language models, making historical information more accessible, including interpretations of political events and details about people's ancestors.
Additionally, the structured article texts in the dataset enable the use of transformer-based methods for applications such as detecting reproduced content. This significantly enhances accuracy compared to relying solely on existing OCR techniques.
The American Stories dataset serves as an invaluable resource for developing multimodal layout analysis models and other multimodal applications. Its vast size and silver quality make it ideal for innovation and research in this domain.
### Languages
English (en)
## Dataset Structure
The raw data on this repo contains compressed chunks of newspaper scans for each year. Each scan has its own JSON file named {scan_id}.json.
The data loading script takes care of downloading, extraction, and parsing, producing two kinds of outputs:
+ Article-Level Output: The unit of the Dataset Dict is an associated article
+ Scan Level Output: The unit of the Dataset Dict is an entire scan with all the raw unparsed data
### Data Instances
Here are some examples of what the output looks like.
#### Article level
```
{
'article_id': '1_1870-01-01_p1_sn82014899_00211105483_1870010101_0773',
'newspaper_name': 'The weekly Arizona miner.',
'edition': '01', 'date': '1870-01-01',
'page': 'p1',
'headline': '',
'byline': '',
'article': 'PREyors 10 leaving San Francisco for Wash ington City, our Governor, A. r. K. Saford. called upon Generals Thomas and Ord and nt the carrying out of what (truncated)'
}
```
#### Scan level
```
{'raw_data_string': '{"lccn": {"title": "The Massachusetts spy, or, Thomas\'s Boston journal.", "geonames_ids": ["4930956"],....other_keys:values}
```
### Data Fields
#### Article Level
+ "article_id": Unique Id for an associated article
+ "newspaper_name": Newspaper Name
+ "edition": Edition number
+ "date": Date of publication
+ "page": Page number
+ "headline": Headline Text
+ "byline": Byline Text
+ "article": Article Text
#### Scan Level
"raw_data_string": Unparsed scan-level data that contains scan metadata from Library of Congress, all content regions with their bounding boxes, OCR text and legibility classification
### Data Splits
There are no train, test or val splits. Since the dataset has a massive number of units (articles or newspaper scans), we have split the data by year. Once the dataset is loaded,
instead of the usual way of accessing a split as dataset["train"], specific years can be accessed using the syntax dataset["year"] where year can be any year between 1774-1963 as long as there is at least one scan for the year.
The data loading script provides options to download both a subset of years and all years at a time.
### Accessing the Data
There are 4 config options that can be used to access the data depending upon the use-case.
```
from datasets import load_dataset
# Download data for the year 1809 at the associated article level (Default)
dataset = load_dataset("dell-research-harvard/AmericanStories",
"subset_years",
year_list=["1809", "1810"]
)
# Download and process data for all years at the article level
dataset = load_dataset("dell-research-harvard/AmericanStories",
"all_years"
)
# Download and process data for 1809 at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
"subset_years_content_regions",
year_list=["1809"]
)
# Download and process data for all years at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
"all_years_content_regions")
```
## Dataset Creation
### Curation Rationale
The dataset was created to provide researchers with a large, high-quality corpus of structured and transcribed newspaper article texts from historical local American newspapers.
These texts provide a massive repository of information about topics ranging from political polarization to the construction of national and cultural identities to the minutiae of the daily lives of people's ancestors.
The dataset will be useful to a wide variety of researchers including historians, other social scientists, and NLP practitioners.
### Source Data
#### Initial Data Collection and Normalization
The dataset is drawn entirely from image scans in the public domain that are freely available for download from the Library of Congress's website.
We processed all images as described in the associated paper.
#### Who are the source language producers?
The source language was produced by people - by newspaper editors, columnists, and other sources.
### Annotations
#### Annotation process
Not Applicable
#### Who are the annotators?
Not Applicable
### Personal and Sensitive Information
Not Applicable
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides high-quality data that could be used for pre-training a large language model to achieve better understanding of historical English and historical world knowledge.
The dataset could also be added to the external database of a retrieval-augmented language model to make historical information - ranging from interpretations of political events to minutiae about the lives of people's ancestors - more widely accessible.
Furthermore, structured article texts that it provides can facilitate using transformer-based methods for popular applications like detection of reproduced content, significantly improving accuracy relative to using the existing OCR.
It can also be used for innovating multimodal layout analysis models and other multimodal applications.
### Discussion of Biases
This dataset contains unfiltered content composed by newspaper editors, columnists, and other sources.
In addition to other potentially harmful content, the corpus may contain factual errors and intentional misrepresentations of news events.
All content should be viewed as individuals' opinions and not as a purely factual account of events of the day.
## Additional Information
### Dataset Curators
Melissa Dell (Harvard), Jacob Carlson (Harvard), Tom Bryan (Harvard) , Emily Silcock (Harvard), Abhishek Arora (Harvard), Zejiang Shen (MIT), Luca D'Amico-Wong (Harvard), Quan Le (Princeton), Pablo Querubin (NYU), Leander Heldring (Kellog School of Business)
### Licensing Information
The dataset has a CC-BY 4.0 license
### Citation Information
Coming Soon
### Contributions
Coming Soon
|
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
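As a sketch of what that looks like in practice, the snippet below uses the `beir` Python package to download and load one of the preprocessed datasets (SciFact, via its download link in the table further below); it is an illustrative example rather than part of this card's original instructions.

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one preprocessed BEIR dataset (SciFact as an example).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: {doc_id: {"title": ..., "text": ...}}, queries: {query_id: text},
# qrels: {query_id: {doc_id: relevance_score}}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
print(len(corpus), len(queries), len(qrels))
```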
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
false | # Dataset Card for HighlightSum Corpus [Single Dataset Comprising AMI, SamSUM & DialogSUM for Brief Summarization of Text]
## Dataset Description
### Links
- **AMI:** https://huggingface.co/datasets/knkarthick/AMI
- **DialogSUM:** https://github.com/cylnlp/dialogsum
- **SamSUM:** https://huggingface.co/datasets/knkarthick/samsum
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
HighlightSUM is a collection of the large-scale dialogue summarization datasets AMI, SamSUM & DialogSUM, consisting of 31,108 dialogues with corresponding manually labeled summaries.
### Languages
English
## Dataset Structure
### Data Instances
HighlightSum is a large-scale dialogue summarization dataset collection, consisting of 31,108 dialogues split into train, test and validation.
The first instance in the training set:
{'id': 'train_0',
'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.",
'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor."}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 27401
- val: 1360
- test: 2347
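A minimal loading sketch (the repository ID `knkarthick/highlightsum` below is an assumption based on the point-of-contact namespace; adjust it if the actual hub path differs):

```python
from datasets import load_dataset

# Repository ID is an assumption; replace with the actual hub path if it differs.
dataset = load_dataset("knkarthick/highlightsum", split="train")
print(dataset[0]["id"], dataset[0]["summary"])
```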
## Dataset Creation
### Curation Rationale
Collection of AMI, SamSUM & DialogSUM Datasets.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
non-commercial licence: MIT
## Citation Information
Refer the above links for Credits & Citations. |
true | |
false |
# PubLayNet
PubLayNet is a large dataset of document images, of which the layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is [PubMed Central Open Access Subset (commercial use collection)](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). The annotations are automatically generated by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper ["PubLayNet: largest dataset ever for document layout analysis."](https://arxiv.org/abs/1908.07836).
The public dataset is distributed in tar.gz format, which doesn't fit nicely with Hugging Face streaming. Modifications have been made to optimise the delivery of the dataset for the Hugging Face datasets API. The original files can be found [here](https://developer.ibm.com/exchanges/data/all/publaynet/).
Licence: [Community Data License Agreement – Permissive – Version 1.0 License](https://cdla.dev/permissive-1-0/)
Author: IBM
GitHub: https://github.com/ibm-aur-nlp/PubLayNet
```
@article{zhong2019publaynet,
  title   = {PubLayNet: largest dataset ever for document layout analysis},
  author  = {Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno},
  journal = {arXiv preprint arXiv:1908.07836},
  year    = {2019}
}
``` |
false |
# Dataset Card for DocLayNet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and an upper bound on the prediction accuracy achievable with ML models
5. *Pre-defined train-, test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined like this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
"doc_category": "financial_reports" // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
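As an illustrative sketch, the custom `doc_category` field can be used to tally pages per category directly from a downloaded COCO annotation file (the file name `coco.json` below is a placeholder):

```python
import json
from collections import Counter

# Load one of the COCO annotation files shipped with DocLayNet
# (the file name below is a placeholder, not the actual distribution name).
with open("coco.json") as f:
    coco = json.load(f)

# Count pages per high-level document category using the custom field.
counts = Counter(image["doc_category"] for image in coco["images"])
print(counts.most_common())
```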
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
  doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
|
true |
Constraint@AAAI2021 - COVID19 Fake News Detection in English
```
@misc{patwa2020fighting,
title={Fighting an Infodemic: COVID-19 Fake News Dataset},
author={Parth Patwa and Shivam Sharma and Srinivas PYKL and Vineeth Guptha and Gitanjali Kumari and Md Shad Akhtar and Asif Ekbal and Amitava Das and Tanmoy Chakraborty},
year={2020},
eprint={2011.03327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
false | # Dataset Card for "Calc-aqua_rat"
### Summary
This dataset is an instance of the [aqua_rat](https://huggingface.co/datasets/aqua_rat) dataset, extended with in-context calculator calls represented as `exec` calls to the `sympy` library.
### Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can out-source the computations in the reasoning chain to a calculator.
### Construction Process
The dataset was constructed automatically by evaluating all candidate calls to a `sympy` library that were extracted from the originally-annotated
*rationale*s. Candidates are selected by matching the equals ('=') symbols in the chain: the left-hand side of each equation is evaluated and accepted as a correct gadget call if its result occurs close by on the right-hand side.
Therefore, the extraction of calculator calls may exhibit false negatives (where the calculator could have been used but was not), but no known false positives.
A full description of the extraction process can be found in the [corresponding parse script](https://github.com/markcheeky/gadgets/blob/7799a7841940b15593d4667219424ee71c74327e/gadgets/aqua.py#L19).
**If you find an issue in the dataset or in the fresh version of the parsing script, we'd be happy if you report it, or create a PR.**
## Dataset Structure
The dataset can be loaded by simply choosing a split (`train`, `validation` or `test`) and calling:
```python
import datasets
dataset_val = datasets.load_dataset("MU-NLPC/Calc-aqua_rat", split="validation")
print(dataset_val[0]) # see the output below
```
### Data Instances
The samples of Calc-aqua_rat have this format (newline-reformated for better readability):
```python
{'question': 'Three birds are flying at a fast rate of 900 kilometers per hour. What is their speed in miles per minute? [1km = 0.6 miles]',
'options': ['A)32400', 'B)6000', 'C)600', 'D)60000', 'E)10'],
'correct': 'A',
'rationale': 'To calculate the equivalent of miles in a kilometer\n
0.6 kilometers = 1 mile\n
900 kilometers = (0.6)*900 = 540 miles\n
In 1 hour there are 60 minutes\n
Speed in miles/minutes = 60 * 540 = 32400\n
Correct answer - A',
'chain': 'To calculate the equivalent of miles in a kilometer\n
0.6 kilometers \n= 1 mile\n
900 kilometers \n= (0.6)*900\n= \n<gadget id="calculator">(0.6)*900</gadget>\n<output>540</output>\n540 miles\n
In 1 hour there are 60 minutes\n
Speed in miles/minutes\n= 60 * 540\n= \n<gadget id="calculator">60 * 540</gadget>\n<output>32_400</output>\n32400\n
Correct answer - 32400\n.
Final result is <result>32400</result>'
}
```
The enclosing HTML tags (e.g. **`<gadget id="calculator">(0.6)*900</gadget>\n<output>540</output>`**) represent the inputs and outputs
to the `sympy.parse_expr().evalf()` method (in our code [here](https://github.com/markcheeky/gadgets/blob/7799a7841940b15593d4667219424ee71c74327e/gadgets/gadget.py#L28)).
Note that the format of the dataset is consistent with [MU-NLPC/Calc-gsm8k](https://huggingface.co/datasets/MU-NLPC/Calc-gsm8k).
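Because the chain is written in a simple HTML-like markup, the calculator calls can be recovered with a standard parser and re-evaluated; a minimal sketch (assumes `beautifulsoup4` and `sympy` are installed):

```python
import datasets
import sympy
from bs4 import BeautifulSoup

dataset_val = datasets.load_dataset("MU-NLPC/Calc-aqua_rat", split="validation")
chain = dataset_val[0]["chain"]

soup = BeautifulSoup(chain, "html.parser")
# Re-evaluate each calculator call and compare with the recorded output.
for gadget, output in zip(soup.find_all("gadget"), soup.find_all("output")):
    value = sympy.parse_expr(gadget.get_text()).evalf()
    print(gadget.get_text(), "->", value, "recorded:", output.get_text())

# The final answer of the chain.
print(soup.find("result").get_text())
```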
### Data Fields
* **question**: A natural language definition of the problem to solve.
* **options**: 5 possible options (A, B, C, D and E), among which one is correct
* **correct**: The correct option
* **rationale**: A natural language sequence of steps leading to a solution of the given problem.
* **chain**: A natural language sequence of steps with inserted calculator calls and outputs of the sympy calculator.
### Data Splits
The samples in data splits are consistent with the original [aqua_rat](https://huggingface.co/datasets/aqua_rat) dataset, containing:
* **train** split of 97467 samples,
* **validation** split of 254 samples,
* **test** split of 254 samples.
## Licensing
Apache-2.0, consistent with the original aqua-rat dataset.
## Cite
If you use this dataset in research, please cite the original [aqua-rat paper](https://arxiv.org/pdf/1705.04146.pdf) and our report as follows:
```bibtex
@article{kadlcik2023calcx,
title={Calc-X: Enriching Arithmetical Chain-of-Thoughts Datasets by Interaction with Symbolic Systems},
author={Marek Kadlčík and Michal Štefánik},
year={2023},
eprint={2305.15017},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
false | # Dataset Card for "timeseries-daily-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The timeseries-daily-sp500 dataset provides daily historical data for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for systematic trading. The model performance is evaluated based on the return / risk profile of the positions taken by the model.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (timestamp[ns, tz=America/New_York]): A timestamp indicating the date of the recorded data. The timestamps are in the America/New_York time zone.
- open (float64): A floating-point number representing the opening price of the stock on the given date.
- high (float64): A floating-point number representing the highest price of the stock on the given date.
- low (float64): A floating-point number representing the lowest price of the stock on the given date.
- close (float64): A floating-point number representing the closing price of the stock on the given date.
- volume (int64): An integer indicating the trading volume (number of shares) of the stock on the given date.
- dividends (float64): A floating-point number representing the dividends paid by the stock on the given date.
- stock_splits (float64): A floating-point number representing any stock splits that occurred on the given date.
### Data Splits
A single split, called train.
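As a sketch of downstream use, the train split can be converted to a pandas DataFrame to compute per-symbol daily returns (the repository ID `edarchimbaud/timeseries-daily-sp500` below is an assumption based on the card title and curator handle):

```python
from datasets import load_dataset

# Repository ID is an assumption; adjust to the actual hub path if needed.
ds = load_dataset("edarchimbaud/timeseries-daily-sp500", split="train")
df = ds.to_pandas()

# Per-symbol close-to-close daily returns.
df = df.sort_values(["symbol", "date"])
df["return"] = df.groupby("symbol")["close"].pct_change()
print(df[["symbol", "date", "close", "return"]].head())
```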
## Dataset Creation
### Curation Rationale
The timeseries-daily-sp500 dataset was developed to support the development of low-frequency trading algorithms.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from the web, and aggregated.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The timeseries-daily-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The timeseries-daily-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, timeseries-daily-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
false |
# Dataset Card for "lmqg/qag_jaquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the JAQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
Japanese (ja)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""Nerdilinga"は898年にカロリング朝の王領として初めて文献に記録されている。レーゲンスブルク司教の統治下でネルトリンゲンは市場町に成長していった。1215年にネルトリンゲンは皇帝フリードリヒ2世から都市権を与えられ、帝国自由都市となった。この年に最初の市壁が築かれた。その縄張りは現在も街の地図に見て取れる。1219年、ネルトリンゲンの聖霊降臨祭についての最も古い文献上の記録が遺されている。重要な交易路が交差するこの都市は穀物、家畜、織物、毛皮、金属製品の主要な集散地に発展していった。ネルトリンゲンはフランクフルトと並ぶドイツで最も重要な遠距離交易都市の一つとなったのである。",
"questions": [ "1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。", "\"Nerdilinga\"の最初の記録は何年のものですか。" ],
"answers": [ "皇帝フリードリヒ2世", "898年" ],
"questions_answers": "question: 1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。, answer: 皇帝フリードリヒ2世 | question: "Nerdilinga"の最初の記録は何年のものですか。, answer: 898年"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
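A minimal loading sketch for this dataset:

```python
from datasets import load_dataset

dataset = load_dataset("lmqg/qag_jaquad", split="train")
example = dataset[0]

# Each instance carries the paragraph, its questions/answers, and a flattened string.
print(example["paragraph"][:50])
print(example["questions_answers"])
```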
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9508| 1431 | 3050|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
false | # Dataset Card for "rico_refexp_combined"
This dataset combines the crowdsourced RICO RefExp prompts from the [UIBert dataset](https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic) and the synthetically generated prompts from the [seq2act dataset](https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic). |
false | |
false | # Dataset Card for "earnings-forecast-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The earnings-forecast-sp500 dataset provides information about the earnings forecast for the S&P 500 index constituents. The dataset includes features that detail each company's fiscal end, the consensus earnings per share (EPS) forecast, the high and low EPS forecasts, the number of estimates, and the number of upward and downward revisions.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string indicating the date of the forecast.
- id (int64): An integer representing the unique identifier for the forecast.
- fiscal_end (string): A string indicating the fiscal end date for the forecast.
- consensus_eps_forecast (float64): A floating-point number representing the consensus earnings per share forecast.
- high_eps_forecast (float64): A floating-point number representing the highest earnings per share forecast.
- low_eps_forecast (float64): A floating-point number representing the lowest earnings per share forecast.
- no_of_estimates (int64): An integer representing the number of estimates contributing to the consensus forecast.
- up (int64): An integer representing the number of upward revisions to the forecast.
- down (int64): An integer representing the number of downward revisions to the forecast.
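A sketch of how these fields might be combined, e.g. to measure forecast dispersion (both the repository ID `edarchimbaud/earnings-forecast-sp500` and the `train` split name below are assumptions):

```python
from datasets import load_dataset

# Repository ID and split name are assumptions; adjust to the actual hub path.
ds = load_dataset("edarchimbaud/earnings-forecast-sp500", split="train")
df = ds.to_pandas()

# Forecast dispersion: spread between high and low EPS forecasts relative to consensus.
df["eps_spread"] = (df["high_eps_forecast"] - df["low_eps_forecast"]) / df["consensus_eps_forecast"].abs()
print(df[["symbol", "fiscal_end", "consensus_eps_forecast", "eps_spread"]].head())
```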
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The earnings-forecast-sp500 dataset was developed to support the development of high-frequency trading algorithms and investment strategies that rely on earnings forecasts.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from financial data providers and normalized for consistency.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The earnings-forecast-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The earnings-forecast-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, earnings-forecast-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
false |
# Dataset Card for solar3
### Dataset Summary
Šolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools
(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the
document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).
\(*) pronounce "š" as "sh" in "shoe".
By default the dataset is provided at **sentence-level** (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences.
There is also an option to aggregate the instances at the **document-level** or **paragraph-level**
by explicitly providing the correct config:
```
datasets.load_dataset("cjvt/solar3", "paragraph_level")`
datasets.load_dataset("cjvt/solar3", "document_level")`
```
### Supported Tasks and Leaderboards
Error correction, e.g., at token/sequence level, as token/sequence classification or text2text generation.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```json
{
'id_doc': 'solar1',
'doc_title': 'KUS-G-slo-1-GO-E-2009-10001',
'is_manually_validated': True,
'src_tokens': ['”', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', '”', ',', 'izreče', 'Antigona', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
'src_ling_annotations': {
# truncated for conciseness
'lemma': ['”', 'ne', 'da', 'sovražiti', ...],
'ana': ['mte:U', 'mte:L', 'mte:Vd', ...],
'msd': ['UPosTag=PUNCT', 'UPosTag=PART|Polarity=Neg', 'UPosTag=SCONJ', ...],
'ne_tag': [..., 'O', 'B-PER', 'O', ...],
'space_after': [False, True, True, False, ...]
},
'tgt_tokens': ['„', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', ',', '”', 'izreče', 'Antigona', 'sebi', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
# omitted for conciseness, the format is the same as in 'src_ling_annotations'
'tgt_ling_annotations': {...},
'corrections': [
{'idx_src': [0], 'idx_tgt': [0], 'corr_types': ['Z/LOČ/nerazvrščeno']},
{'idx_src': [10, 11], 'idx_tgt': [10, 11], 'corr_types': ['Z/LOČ/nerazvrščeno']},
{'idx_src': [], 'idx_tgt': [14], 'corr_types': ['O/KAT/povratnost']}
]
}
```
The instance represents corrections in the document 'solar1' (`id_doc`), which were manually assigned/validated (`is_manually_validated`). More concretely, the source sentence contains three errors (as indicated by the three elements in `corrections`):
- a punctuation change: '”' -> '„';
- a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside);
- addition of a new word: 'sebi'.
### Data Fields
- `id_doc`: a string containing the identifying name of the document in which the sentence appears;
- `doc_title`: a string containing the assigned document title;
- `is_manually_validated`: a bool indicating whether the document in which the sentence appears was reviewed by a teacher;
- `src_tokens`: words in the source sentence (`[]` if there is no source sentence);
- `src_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the source tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token;
- `tgt_tokens`: words in the target sentence (`[]` if there is no target sentence);
- `tgt_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the target tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token;
- `corrections`: a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved (`idx_src`), target tokens involved (`idx_tgt`), and the categories of the corrections made (`corr_types`). Please note that there can be multiple assigned categories for one annotated correction, in which case `len(corr_types) > 1`.
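For illustration, the token indices in `corrections` can be mapped back to the source and target tokens; a minimal sketch using the default sentence-level config (the `train` split name is an assumption):

```python
import datasets

dataset = datasets.load_dataset("cjvt/solar3", split="train")
example = dataset[0]

# Print every annotated correction as (source tokens -> target tokens, categories).
for corr in example["corrections"]:
    src = [example["src_tokens"][i] for i in corr["idx_src"]]
    tgt = [example["tgt_tokens"][i] for i in corr["idx_tgt"]]
    print(src, "->", tgt, corr["corr_types"])
```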
## Dataset Creation
The Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text.
## Additional Information
### Dataset Curators
Špela Arhar Holdt; et al. (please see http://hdl.handle.net/11356/1589 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{solar3,
title = {Developmental corpus {\v S}olar 3.0},
author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},
url = {http://hdl.handle.net/11356/1589},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
false | # Breast cancer
The [Breast cancer dataset](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Original%29) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Classify cancerousness of the given cell.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| cancer | Binary classification | Is the cell clump cancerous? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/breast", "cancer")["train"]
```
# Features
| **Name** |**Type**|**Description** |
|-------------------------------|--------|----------------------------|
|`clump_thickness` |`int8` |Thickness of the clump |
|`uniformity_of_cell_size` |`int8` |Uniformity of cell size |
|`uniformity_of_cell_shape` |`int8` |Uniformity of cell shape |
|`marginal_adhesion` |`int8` |Marginal adhesion |
|`single_epithelial_cell_size` |`int8` |Single epithelial cell size |
|`bare_nuclei` |`int8` |Bare nuclei |
|`bland_chromatin` |`int8` |Bland chromatin |
|`normal_nucleoli` |`int8` |Normal nucleoli |
|`mitoses` |`int8` |Mitoses |
|**is_cancer** |`int8` |Is the clump cancerous | |
true |
# Dataset Card for tcfd_recommendations
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for classifying the TCFD recommendation categories ([fsb-tcfd.org](https://www.fsb-tcfd.org)) of paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a multiclass classification task of paragraphs into the four TCFD recommendation categories (governance, strategy, risk management, metrics and targets) and the non-climate-related class.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> none (i.e., not climate-related), 1 -> metrics, 2 -> strategy, 3 -> risk, 4 -> governance)
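A sketch of loading the data and mapping the numeric labels back to the TCFD categories (the repository ID `climatebert/tcfd_recommendations` below is an assumption):

```python
from datasets import load_dataset

# Repository ID is an assumption; adjust to the actual hub path if needed.
ds = load_dataset("climatebert/tcfd_recommendations", split="train")

label_names = {0: "none", 1: "metrics", 2: "strategy", 3: "risk", 4: "governance"}
sample = ds[0]
print(sample["text"][:80], "->", label_names[sample["label"]])
```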
### Data Splits
The dataset is split into:
- train: 1,300
- test: 400
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. |
false |
# Dataset Card for MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/mc4_legal)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
This dataset contains large text resources (~133GB in total) from mc4 filtered for legal data that can be used for pretraining language models.
Use the dataset like this:
```python
from datasets import load_dataset
dataset = load_dataset("joelito/mc4_legal", "de", split='train', streaming=True)
```
### Supported Tasks and Leaderboards
The dataset supports the task of masked language modeling.
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
| Source | Size (MB) | Words | Documents | Words/Document |
|:---------|------------:|------------:|------------:|-----------------:|
| all | 448980 | 28599300521 | 9873288 | 2896 |
| bg | 57 | 2390349 | 379 | 6306 |
| cs | 31005 | 1840827375 | 677796 | 2715 |
| da | 162 | 10466716 | 3231 | 3239 |
| de | 105739 | 6184578784 | 3164461 | 1954 |
| el | 30 | 1155977 | 307 | 3765 |
| en | 13734 | 966539309 | 359283 | 2690 |
| es | 132053 | 9058939804 | 2281888 | 3969 |
| et | 2059 | 110198368 | 49987 | 2204 |
| fi | 1270 | 62799074 | 44875 | 1399 |
| fr | 30878 | 2117306229 | 598983 | 3534 |
| ga | 1 | 32772 | 8 | 4096 |
| hu | 4677 | 244911748 | 58857 | 4161 |
| it | 46957 | 3053920779 | 990823 | 3082 |
| lt | 156 | 9142223 | 1529 | 5979 |
| lv | 1 | 58702 | 16 | 3668 |
| mt | 65 | 3479869 | 731 | 4760 |
| nl | 326 | 21962633 | 6875 | 3194 |
| pl | 37950 | 2235839721 | 827641 | 2701 |
| pt | 20120 | 1338147828 | 382173 | 3501 |
| ro | 8816 | 551372510 | 136513 | 4038 |
| sk | 5850 | 349265172 | 130701 | 2672 |
| sl | 1742 | 107493024 | 32574 | 3299 |
| sv | 5332 | 328471555 | 123657 | 2656 |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
The dataset was created by filtering mc4 for legal data.
We used terms indicating legal citations to get the texts.
Note that this dataset can be quite noisy, and the quality is not known.
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
|
true |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
Natural Language Inference
Text Classification
### Languages
en
## Dataset Structure
### Data Instances
### Data Fields
premise:
hypothesis:
label:
### Data Splits
Evaluation: 258 samples
## Dataset Creation
### Curation Rationale
Extracting samples corresponding to different linguistic constructions of negation.
### Source Data
Geoffrey K. Pullum and Rodney Huddleston. 2002. Negation, chapter 9. Cambridge University Press.
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotators are the authors of the papers, one of whom holds a graduate degree in linguistics.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@joey234](https://github.com/joey234) for adding this dataset. |
false | # Dataset Card for "Calc-gsm8k"
## Summary
This dataset is an instance of the gsm8k dataset, converted to a simple html-like language that can be easily parsed (e.g. by BeautifulSoup; a parsing sketch follows the tag list below). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer of the mathematical problem (a number)
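A minimal parsing sketch for this tag format (the chain string below is illustrative, not a real sample; assumes `beautifulsoup4` and `sympy` are installed):

```python
import sympy
from bs4 import BeautifulSoup

# A toy chain in the dataset's tag format (illustrative string, not taken from the data).
chain = ('She buys 3 packs of 4 apples: <gadget id="calculator">3*4</gadget>'
         '<output>12</output> 12 apples. Final result is <result>12</result>')

soup = BeautifulSoup(chain, "html.parser")
for gadget in soup.find_all("gadget"):
    # Re-run the calculator call with a sympy-based evaluation.
    print(gadget.get_text(), "=", sympy.parse_expr(gadget.get_text()).evalf())
print("final answer:", soup.find("result").get_text())
```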
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can out-source the computations in the reasoning chain to a calculator.
## Construction Process
The answers in the original dataset were in a structured but non-standard format. The answers were therefore parsed, all arithmetical expressions were evaluated using a sympy-based calculator, the outputs were checked to be consistent with the intermediate results, and the chains were finally exported into a simple html-like language that BeautifulSoup can parse.
## Content and Data splits
Content and splits correspond to the original gsm8k dataset.
See [gsm8k HF dataset](https://huggingface.co/datasets/gsm8k) and [official repository](https://github.com/openai/grade-school-math) for more info.
## Licence
MIT, consistent with the original dataset.
## Cite
If you use this version of the dataset in research, please cite the [original GSM8K paper](https://arxiv.org/abs/2110.14168) and our report as follows:
```bibtex
@article{kadlcik2023calcx,
title={Calc-X: Enriching Arithmetical Chain-of-Thoughts Datasets by Interaction with Symbolic Systems},
author={Marek Kadlčík and Michal Štefánik},
year={2023},
eprint={2305.15017},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
true |
# Dataset Card for Indonesian Google Play Review
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Scraped from an e-commerce app on Google Play.
### Supported Tasks and Leaderboards
Sentiment Analysis
### Languages
Indonesian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. |
true | # Dataset Card for openlegaldata.io bulk case data
This is a copy of the latest dump from [openlegaldata.io](https://de.openlegaldata.io/). I will try to keep this updated, since there is no official Hugging Face dataset repo.
## Dataset Description
- **Homepage:** [https://de.openlegaldata.io/](https://de.openlegaldata.io/)
- **Repository:** [Bulk Data](https://static.openlegaldata.io/dumps/de/)
### Dataset Summary
This is the openlegaldata bulk case download from October 2022. Please refer to the official website (above) for more information. I have not made any changes to it, since I use different datasets for my own projects.
### Languages
- German
## Additional Information
### Licensing/Citation Information
The [openlegaldata platform](https://github.com/openlegaldata/oldp) is licensed under the MIT license; you can access the dataset by citing the original source, [openlegaldata.io](https://de.openlegaldata.io/). |
false |
# Dataset Card for QA2D
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://worksheets.codalab.org/worksheets/0xd4ebc52cebb84130a07cbfe81597aaf0/
- **Repository:** https://github.com/kelvinguu/qanli
- **Paper:** https://arxiv.org/abs/1809.02922
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of large-scale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.
This Question to Declarative Sentence (QA2D) Dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of question-answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
en
## Dataset Structure
### Data Instances
See below.
### Data Fields
- `dataset`: lowercased name of dataset (movieqa, newsqa, qamr, race, squad)
- `example_uid`: unique id of example within dataset (there are examples with the same uids from different datasets, so the combination of dataset + example_uid should be used for unique indexing)
- `question`: tokenized (space-separated) question from the source QA dataset
- `answer`: tokenized (space-separated) answer span from the source QA dataset
- `turker_answer`: tokenized (space-separated) answer sentence collected from MTurk
- `rule-based`: tokenized (space-separated) answer sentence, generated by the rule-based model
### Data Splits
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 60,710 |
| Dev | 10,344 |
## Dataset Creation
### Curation Rationale
This Question to Declarative Sentence (QA2D) Dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of question-answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@article{DBLP:journals/corr/abs-1809-02922,
  author     = {Dorottya Demszky and Kelvin Guu and Percy Liang},
  title      = {Transforming Question Answering Datasets Into Natural Language Inference Datasets},
  journal    = {CoRR},
  volume     = {abs/1809.02922},
  year       = {2018},
  url        = {http://arxiv.org/abs/1809.02922},
  eprinttype = {arXiv},
  eprint     = {1809.02922},
  timestamp  = {Fri, 05 Oct 2018 11:34:52 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1809-02922.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
``` |
false | # Dataset Card for SAMSum Corpus
## Dataset Description
### Links
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** https://arxiv.org/abs/1911.12237v2
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Languages
English
## Dataset Structure
### Data Instances
The SAMSum dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 14732
- val: 818
- test: 819
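A minimal loading sketch (the repository ID `knkarthick/samsum` below is an assumption based on the point of contact; adjust it if the actual hub path differs):

```python
from datasets import load_dataset

# Repository ID is an assumption; replace with the actual hub path if it differs.
dataset = load_dataset("knkarthick/samsum", split="train")
print(dataset[0]["dialogue"])
print(dataset[0]["summary"])
```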
## Dataset Creation
### Curation Rationale
In paper:
In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
### Annotation process
In paper:
Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
## Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
## Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
## Contributions |
true |
# Dataset Card for SPICED
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.copenlu.com/publication/2022_emnlp_wright/
- **Repository:** https://github.com/copenlu/scientific-information-change
- **Paper:**
### Dataset Summary
The Scientific Paraphrase and Information ChangE Dataset (SPICED) is a dataset of paired scientific findings from scientific papers, news media, and Twitter. The types of pairs are between <paper, news> and <paper, tweet>. Each pair is labeled for the degree of information similarity in the _findings_ described by each sentence, on a scale from 1-5. This is called the _Information Matching Score (IMS)_. The data was curated from S2ORC and matched news articles and Tweets using Altmetric. Instances are annotated by experts using the Prolific platform and Potato. Please use the following citation when using this dataset:
```
@article{modeling-information-change,
title={{Modeling Information Change in Science Communication with Semantically Matched Paraphrases}},
author={Wright, Dustin and Pei, Jiaxin and Jurgens, David and Augenstein, Isabelle},
year={2022},
booktitle = {Proceedings of EMNLP},
    publisher = {Association for Computational Linguistics}
}
```
### Supported Tasks and Leaderboards
The task is to predict the IMS between two scientific sentences, which is a scalar between 1 and 5. Preferred metrics are mean-squared error and Pearson correlation.
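As a sketch of this evaluation, both metrics can be computed from gold and predicted IMS values with `scikit-learn` and `scipy`; the arrays below are placeholders, not real predictions:
```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_squared_error

# Placeholder gold and predicted IMS values on the 1-5 scale.
gold = np.array([4.5, 1.0, 3.2, 2.8, 4.9])
pred = np.array([4.1, 1.5, 3.0, 3.1, 4.6])

mse = mean_squared_error(gold, pred)
corr, _ = pearsonr(gold, pred)
print(f"MSE: {mse:.3f}  Pearson r: {corr:.3f}")
```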
### Languages
English
## Dataset Structure
### Data Fields
- DOI: The DOI of the original scientific article
- instance\_id: Unique instance ID for the sample. The ID contains the field, whether or not the instance is a tweet, and whether the sample was labeled manually or automatically using SBERT (automatically labeled samples are marked as "easy")
- News Finding: Text of the news or tweet finding
- Paper Finding: Text of the paper finding
- News Context: For news instances, the surrounding two sentences for the news finding. For tweets, a copy of the tweet
- Paper Context: The surrounding two sentences for the paper finding
- scores: Annotator scores after removing low competence annotators
- field: The academic field of the paper ('Computer\_Science', 'Medicine', 'Biology', or 'Psychology')
- split: The dataset split ('train', 'val', or 'test')
- final\_score: The IMS of the instance
- source: Either "news" or "tweet"
- News Url: A URL to the source article if a news instance or the tweet ID of a tweet
### Data Splits
- train: 4721 instances
- validation: 664 instances
- test: 640 instances
## Dataset Creation
For the full details of how the dataset was created, please refer to our [EMNLP 2022 paper]().
### Curation Rationale
Science communication is a complex process of translation from highly technical scientific language to common language that lay people can understand. At the same time, the general public relies on good science communication in order to inform critical decisions about their health and behavior. SPICED was curated in order to provide a training dataset and benchmark for machine learning models to measure changes in scientific information at different stages of the science communication pipeline.
### Source Data
#### Initial Data Collection and Normalization
Scientific text: S2ORC
News articles and Tweets are collected through Altmetric.
#### Who are the source language producers?
Scientists, journalists, and Twitter users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Models trained on SPICED can be used to perform large scale analyses of science communication. They can be used to match the same finding discussed in different media, and reveal trends in differences in reporting at different stages of the science communication pipeline. It is hoped that this can help to build tools which will improve science communication.
### Discussion of Biases
The dataset is restricted to computer science, medicine, biology, and psychology, which may introduce some bias in the topics which models will perform well on.
### Other Known Limitations
While some context is available, we do not release the full text of news articles and scientific papers, which may contain further context to help with learning the task. We do however provide the paper DOIs and links to the original news articles in case full text is desired.
## Additional Information
### Dataset Curators
Dustin Wright, Jiaxin Pei, David Jurgens, and Isabelle Augenstein
### Licensing Information
MIT
### Contributions
Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset. |
false | ERROR: type should be string, got "\nhttps://github.com/allenai/csqa2\n\n```\n@article{talmor2022commonsenseqa,\n title={CommonsenseQA 2.0: Exposing the limits of AI through gamification},\n author={Talmor, Alon and Yoran, Ori and Bras, Ronan Le and Bhagavatula, Chandra and Goldberg, Yoav and Choi, Yejin and Berant, Jonathan},\n journal={arXiv preprint arXiv:2201.05320},\n year={2022}\n}\n```" |
false | # Dataset Card for "earnings-surprise-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The earnings-surprise-sp500 dataset provides information about the earnings surprises of the S&P 500 index constituents. An earnings surprise occurs when a company's reported quarterly or annual profits are above or below analysts' expectations.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string indicating the date of the earnings release.
- id (int64): A unique identifier for each earnings surprise event.
- fiscal_qtr_end (string): A string indicating the end of the fiscal quarter for which earnings are reported.
- date_reported (timestamp[ns]): The exact timestamp when the earnings were reported.
- eps (float64): The Earnings Per Share (EPS) reported by the company for the period.
- consensus_forecast (string): The consensus forecast for the EPS among analysts before the earnings were released.
- percentage_surprise (string): The percentage by which the reported EPS differed from the consensus forecast, representing the earnings surprise.
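Because `consensus_forecast` and `percentage_surprise` are stored as strings, they usually need a parsing step before analysis. A minimal sketch, assuming the Hub identifier below and a simple "%"/"+"-style formatting of the surprise column:
```python
import pandas as pd
from datasets import load_dataset

# Hub identifier and string formats are assumptions; adjust to the actual repository.
ds = load_dataset("edarchimbaud/earnings-surprise-sp500", split="train")
df = ds.to_pandas()

# Strip "%" and "+" markers before converting the surprise column to floats.
cleaned = (
    df["percentage_surprise"]
    .str.replace("%", "", regex=False)
    .str.replace("+", "", regex=False)
)
df["percentage_surprise"] = pd.to_numeric(cleaned, errors="coerce")
print(df[["symbol", "date_reported", "eps", "percentage_surprise"]].head())
```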
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The earnings-surprise-sp500 dataset was curated to facilitate the study of earnings surprises and their impact on stock prices.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from various financial data providers and normalized for this dataset.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The earnings-surprise-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The earnings-surprise-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, earnings-surprise-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
true |
# Dataset Card for predicting-brazilian-court-decisions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/lagefreitas/predicting-brazilian-court-decisions
- **Paper:** Lage-Freitas, A., Allende-Cid, H., Santana, O., & Oliveira-Lage, L. (2022). Predicting Brazilian Court
Decisions. PeerJ. Computer Science, 8, e904–e904. https://doi.org/10.7717/peerj-cs.904
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
The dataset is a collection of 4043 *Ementa* (summary) court decisions and their metadata from
the *Tribunal de Justiça de Alagoas* (TJAL, the State Supreme Court of Alagoas, Brazil). The court decisions are labeled
according to 7 categories and whether or not the decisions were unanimous on the part of the judges. The dataset
supports the task of Legal Judgment Prediction.
### Supported Tasks and Leaderboards
Legal Judgment Prediction
### Languages
Brazilian Portuguese
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present (train, validation and test) for each configuration.
### Data Fields
The dataset contains the following fields:
- `process_number`: A number assigned to the decision by the court
- `orgao_julgador`: Judging Body: one of '1ª Câmara Cível', '2ª Câmara Cível', '3ª Câmara Cível', 'Câmara Criminal', 'Tribunal Pleno', 'Seção Especializada Cível'
- `publish_date`: The date when the decision was published (14/12/2018 - 03/04/2019). At the time of collection (2018-2019),
  the scraping script was limited and could not be configured to retrieve data for a specific date range; therefore,
  only data from the last few months was scraped.
- `judge_relator`: Judicial panel
- `ementa_text`: Summary of the court decision
- `decision_description`: **Suggested input**. Corresponds to ementa_text - judgment_text - unanimity_text. Basic
statistics (number of words): mean: 119, median: 88, min: 12, max: 1400
- `judgment_text`: The text used for determining the judgment label
- `judgment_label`: **Primary suggested label**. Labels that can be used to train a model for judgment prediction:
- `no`: The appeal was denied
- `partial`: For partially favourable decisions
- `yes`: For fully favourable decisions
- removed labels (present in the original dataset):
- `conflito-competencia`: Meta-decision. For example, a decision just to tell that Court A should rule this case
and not Court B.
- `not-cognized`: The appeal was not accepted to be judged by the court
  - `prejudicada`: The case could not be judged due to some impediment, for instance because the appellant died or gave up on the case.
- `unanimity_text`: Portuguese text to describe whether the decision was unanimous or not.
- `unanimity_label`: **Secondary suggested label**. Unified labels to describe whether the decision was unanimous or
not (in some cases contains ```not_determined```); they can be used for model training as well (Lage-Freitas et al.,
2019).
### Data Splits
The data has been split randomly into 80% train (3234), 10% validation (404), 10% test (405).
There are two tasks possible for this dataset.
#### Judgment
Label Distribution
| judgment | train | validation | test |
|:----------|---------:|-----------:|--------:|
| no | 1960 | 221 | 234 |
| partial | 677 | 96 | 93 |
| yes | 597 | 87 | 78 |
| **total** | **3234** | **404** | **405** |
#### Unanimity
In this configuration, all cases that have `not_determined` as `unanimity_label` can be removed.
Label Distribution
| unanimity_label | train | validation | test |
|:-----------------|----------:|---------------:|---------:|
| not_determined | 1519 | 193 | 201 |
| unanimity | 1681 | 205 | 200 |
| not-unanimity | 34 | 6 | 4 |
| **total** | **3234** | **404** | **405** |
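A minimal sketch of preparing the unanimity configuration as described above, dropping the `not_determined` cases; the Hub identifier (and any configuration name) are assumptions:
```python
from datasets import load_dataset

# Repository identifier is an assumption; adjust to the actual Hub name.
dataset = load_dataset("joelito/brazilian_court_decisions", split="train")

# For the unanimity task, drop cases whose unanimity label is not determined.
unanimity_train = dataset.filter(lambda ex: ex["unanimity_label"] != "not_determined")
print(len(dataset), "->", len(unanimity_train))
```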
## Dataset Creation
### Curation Rationale
This dataset was created to further the research on developing models for predicting Brazilian court decisions that are
also able to predict whether the decision will be unanimous.
### Source Data
The data was scraped from the *Tribunal de Justiça de Alagoas* (TJAL, the State Supreme Court of Alagoas, Brazil).
#### Initial Data Collection and Normalization
*“We developed a Web scraper for collecting data from Brazilian courts. The scraper first searched for the URL that
contains the list of court cases […]. Then, the scraper extracted from these HTML files the specific case URLs and
downloaded their data […]. Next, it extracted the metadata and the contents of legal cases and stored them in a CSV file
format […].”* (Lage-Freitas et al., 2022)
#### Who are the source language producers?
The source language producers are presumably attorneys, judges, and other legal professionals.
### Annotations
#### Annotation process
The dataset was not annotated.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The court decisions might contain sensitive information about individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton
Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset
consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the
dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that,
differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to
have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the
original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to
the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.
## Additional Information
Lage-Freitas, A., Allende-Cid, H., Santana Jr, O., & Oliveira-Lage, L. (2019). Predicting Brazilian court decisions:
- "In Brazil [...] lower court judges decisions might be appealed to Brazilian courts (*Tribiunais de Justiça*) to be
reviewed by second instance court judges. In an appellate court, judges decide together upon a case and their
decisions are compiled in Agreement reports named *Acóordãos*."
### Dataset Curators
The names of the original dataset curators and creators can be found in references given below, in the section *Citation
Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch)
; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch)
; [Github](https://github.com/kapllan)).
### Licensing Information
No licensing information was provided for this dataset. However, please make sure that you use the dataset according to
Brazilian law.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.1905.10348,
author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and de Oliveira-Lage, L{\'{i}}via},
doi = {10.48550/ARXIV.1905.10348},
keywords = {Computation and Language (cs.CL),FOS: Computer and information sciences,Social and Information Networks (cs.SI)},
publisher = {arXiv},
title = {{Predicting Brazilian court decisions}},
url = {https://arxiv.org/abs/1905.10348},
year = {2019}
}
```
```
@article{Lage-Freitas2022,
author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and Oliveira-Lage, L{\'{i}}via},
doi = {10.7717/peerj-cs.904},
issn = {2376-5992},
journal = {PeerJ. Computer science},
keywords = {Artificial intelligence,Jurimetrics,Law,Legal,Legal NLP,Legal informatics,Legal outcome forecast,Litigation prediction,Machine learning,NLP,Portuguese,Predictive algorithms,judgement prediction},
language = {eng},
month = {mar},
pages = {e904--e904},
publisher = {PeerJ Inc.},
title = {{Predicting Brazilian Court Decisions}},
url = {https://pubmed.ncbi.nlm.nih.gov/35494851 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/},
volume = {8},
year = {2022}
}
```
### Contributions
Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this
dataset.
|
false |
# Dataset Card for `cranfield`
The `cranfield` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/cranfield#cranfield).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,400
- `queries` (i.e., topics); count=225
- `qrels`: (relevance assessments); count=1,837
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/cranfield', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ..., 'author': ..., 'bib': ...}
queries = load_dataset('irds/cranfield', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/cranfield', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false | |
false |
# KP20k Benchmark Dataset for Keyphrase Generation
## About
KP20k is a dataset for benchmarking keyphrase extraction and generation models.
The data is composed of 570 809 abstracts and their associated titles from scientific articles.
Details about the dataset can be found in the original paper:
- Meng et al 2017.
[Deep keyphrase Generation](https://aclanthology.org/P17-1054.pdf)
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 582–592
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper:
- Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
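The matching step can be illustrated with a small sketch: stem both the keyphrase and the source text with Porter's stemmer and check whether the stemmed keyphrase occurs as a contiguous token span. Whitespace tokenization is used here for brevity instead of the spacy rules described above:
```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def stem_tokens(text):
    # Naive whitespace tokenization, for illustration only.
    return [stemmer.stem(tok) for tok in text.lower().split()]

def is_present(keyphrase, source_text):
    """Check whether the stemmed keyphrase occurs as a contiguous token span."""
    kp = stem_tokens(keyphrase)
    src = stem_tokens(source_text)
    n = len(kp)
    return any(src[i:i + n] == kp for i in range(len(src) - n + 1))

print(is_present("graph-based ranking", "A graph-based ranking approach to keyphrase extraction"))
```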
## Content
The dataset is divided into the following three splits:
| Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 530 809 | 5.29 | 58.19 | 10.93 | 17.36 | 13.52 |
| Test | 20 000 | 5.28 | 58.40 | 10.84 | 17.20 | 13.56 |
| Validation | 20 000 | 5.27 | 58.20 | 10.94 | 17.26 | 13.61 |
The following data fields are available:
- **id**: unique identifier of the document. **NB** There were no ids in the original dataset. The ids were generated using the python module shortuuid (https://pypi.org/project/shortuuid/)
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
**NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract). |
false |
# Dataset Card for GTSRB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** http://www.sciencedirect.com/science/article/pii/S0893608012000457
- **Repository:** https://github.com/bazylhorsey/gtsrb/
- **Paper:** Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition
- **Leaderboard:** https://benchmark.ini.rub.de/gtsrb_results.html
- **Point of Contact:** bhorsey16@gmail.com
### Dataset Summary
The German Traffic Sign Benchmark is a multi-class, single-image classification challenge held at the International Joint Conference on Neural Networks (IJCNN) 2011. We cordially invite researchers from relevant fields to participate: The competition is designed to allow for participation without special domain knowledge. Our benchmark has the following properties:
- Single-image, multi-class classification problem
- More than 40 classes
- More than 50,000 images in total
- Large, lifelike database
### Supported Tasks and Leaderboards
[Kaggle](https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign) \
[Original](https://benchmark.ini.rub.de/gtsrb_results.html)
## Dataset Structure
### Data Instances
```
{
"Width": 31,
"Height": 31,
"Roi.X1": 6,
"Roi.Y1": 6,
"Roi.X2": 26,
"Roi.Y2": 26,
"ClassId": 20,
"Path": "Train/20/00020_00004_00002.png",
}
```
### Data Fields
- Width: width of image
- Height: Height of image
- Roi.X1: Upper left X coordinate
- Roi.Y1: Upper left Y coordinate
- Roi.X2: Lower right X coordinate
- Roi.Y2: Lower right Y coordinate
- ClassId: Class of image
- Path: Path of image
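The `Roi.*` coordinates give the bounding box of the traffic sign inside the image, so a common preprocessing step is to crop to that region. A minimal sketch with PIL, assuming the image files are available locally under the paths listed in `Path`:
```python
from PIL import Image

# Example record matching the fields above; the local path is an assumption.
record = {
    "Roi.X1": 6, "Roi.Y1": 6, "Roi.X2": 26, "Roi.Y2": 26,
    "Path": "Train/20/00020_00004_00002.png",
}

img = Image.open(record["Path"])
# Crop to the region of interest: (left, upper, right, lower).
sign = img.crop((record["Roi.X1"], record["Roi.Y1"], record["Roi.X2"], record["Roi.Y2"]))
sign.save("cropped_sign.png")
```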
### Data Splits
Categories: 43
Train: 39209
Test: 12630
## Dataset Creation
### Curation Rationale
Recognition of traffic signs is a challenging real-world problem of high industrial relevance. Although commercial systems have reached the market and several studies on this topic have been published, systematic unbiased comparisons of different approaches are missing and comprehensive benchmark datasets are not freely available.
Traffic sign recognition is a multi-class classification problem with unbalanced class frequencies. Traffic signs can provide a wide range of variations between classes in terms of color, shape, and the presence of pictograms or text. However, there exist subsets of classes (e. g., speed limit signs) that are very similar to each other.
The classifier has to cope with large variations in visual appearances due to illumination changes, partial occlusions, rotations, weather conditions, etc.
Humans are capable of recognizing the large variety of existing road signs with close to 100% correctness. This does not only apply to real-world driving, which provides both context and multiple views of a single traffic sign, but also to the recognition from single images.
<!-- ### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] -->
|
false |
# Dataset Card for "yerevann/coco-karpathy"
The Karpathy split of COCO for image captioning.
|
false | # Dataset Card for Project Gutenberg - English Language eBooks
A collection of English language eBooks (48284 rows, 80%+ of all English language books available on the site) from the Project Gutenberg site with metadata removed.
Originally collected for https://github.com/LAION-AI/Open-Assistant (follows the OpenAssistant training format)
The METADATA column contains catalogue meta information on each book as a serialized JSON:
| key | original column |
|----|----|
| language | - |
| text_id | Text# unique book identifier on Project Gutenberg as *int* |
| title | Title of the book as *string* |
| issued | Issued date as *string* |
| authors | Authors as *string*, comma separated sometimes with dates |
| subjects | Subjects as *string*, various formats |
| locc | LoCC code as *string* |
| bookshelves | Bookshelves as *string*, optional |
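Since METADATA is a serialized JSON string, it has to be parsed before the individual catalogue fields can be used. A minimal sketch, assuming the Hub identifier below:
```python
import json
from datasets import load_dataset

# Hub identifier is an assumption; adjust to the actual repository name.
ds = load_dataset("sedthh/gutenberg_english", split="train")

sample = ds[0]
meta = json.loads(sample["METADATA"])
print(meta["title"], "-", meta["authors"], f"(Text# {meta['text_id']})")
```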
## Source data
**How was the data generated?**
- A crawler (see Open-Assistant repository) downloaded the raw HTML code for
each eBook based on **Text#** id in the Gutenberg catalogue (if available)
- The metadata and the body of text are not clearly separated so an additional
parser attempts to split them, then remove transcriber's notes and e-book
related information from the body of text (text clearly marked as copyrighted or
malformed was skipped and not collected)
- The body of cleaned TEXT as well as the catalogue METADATA is then saved as
a parquet file, with all columns being strings
**Copyright notice:**
- Some of the books are copyrighted! The crawler ignored all books
with an english copyright header by utilizing a regex expression, but make
sure to check out the metadata for each book manually to ensure they are okay
to use in your country! More information on copyright:
https://www.gutenberg.org/help/copyright.html and
https://www.gutenberg.org/policy/permission.html
- Project Gutenberg has the following requests when using books without
metadata: _Books obtained from the Project Gutenberg site should have the
following legal note next to them: "This eBook is for the use of anyone
anywhere in the United States and most other parts of the world at no cost and
with almost no restrictions whatsoever. You may copy it, give it away or
re-use it under the terms of the Project Gutenberg License included with this
eBook or online at www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country where you are located
before using this eBook."_ |
false |
# Coco dataset loader based on tensorflow dataset coco
## Object Detection
```python
import os
from datasets import load_dataset
from PIL import Image, ImageFont, ImageDraw, ImageColor
def calc_lum(rgb):
return (0.2126*rgb[0] + 0.7152*rgb[1] + 0.0722*rgb[2])
COLOR_MAP = [ImageColor.getrgb(code) for name, code in ImageColor.colormap.items()]
def get_text_bbox(bb, tbb, margin, im_w, im_h, anchor="leftBottom"):
m = margin
l, t, r, b = bb
tl, tt, tr, tb = tbb
bbw, bbh = r - l, b - t
tbbw, tbbh = tr - tl, tb - tt
# bbox (left-top)
if anchor == "leftTop":
ax, ay = l, t
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-bottom)
x1, y1 = max(ax, 0), max(ay - tb - 2*m, 0)
x2, y2 = min(x1 + tr + 2*m, im_w), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-top)
x1, y1 = max(ax, 0), max(ay, 0)
x2, y2 = min(x1 + tr + 2*m, im_w), min(y1 + tb + 2*m, im_h)
return (( x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "rightTop":
ax, ay = r, t
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-bottom)
x2, y1 = max(ax, 0), max(ay - tb - 2*m, 0)
x1, y2 = max(x2 - tr - 2*m, 0), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-top)
x2, y1 = max(ax, 0), max(ay, 0)
x1, y2 = max(x2 - tr - 2*m, 0), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "rightBottom":
ax, ay = r, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x2, y2 = min(ax, im_w), min(ay + tb + 2*m, im_h)
x1, y1 = max(x2 - tr - 2*m, 0), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x2, y2 = min(ax, im_w), max(ay, 0)
x1, y1 = max(x2 - tr - 2*m, 0), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "leftBottom":
ax, ay = l, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x1, y2 = min(ax, im_w), min(ay + tb + 2*m, im_h)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x1, y2 = min(ax, im_w), max(ay, 0)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "centerBottom":
ax, ay = (l+r)//2, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x1, y2 = min(ax - tr//2 - m, im_w), min(ay + tb + 2*m, im_h)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x1, y2 = min(ax - tr//2 - m, im_w), max(ay, 0)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
def draw_bbox(image, objects, out_path, label_names=None, font="Roboto-Bold.ttf", fontsize=15, fill=True, opacity=60, width=2, margin=3, anchor="leftBottom"):
fnt = ImageFont.truetype(font, fontsize)
im_w, im_h = image.size
img = image.convert("RGBA")
overlay = Image.new('RGBA', img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
for bb, lbl_id in zip(objects["bbox"], objects["label"]):
c = COLOR_MAP[min(lbl_id, len(COLOR_MAP)-1)]
fill_c = c + (opacity, ) if fill else None
draw.rectangle((bb[0], bb[1], bb[2], bb[3]), outline=c, fill=fill_c, width=width)
text = ""
if label_names is not None:
text = label_names[lbl_id]
tbb = fnt.getbbox(text)
btn_bbox, text_pos = get_text_bbox(bb, tbb, margin, im_w, im_h, anchor)
fc = (0, 0, 0) if calc_lum(c) > 150 else (255, 255, 255)
draw.rectangle(btn_bbox, outline=c, fill=c + (255, ))
draw.text(text_pos, text, font=fnt, fill=fc + (255, ))
img = Image.alpha_composite(img, overlay)
overlay = Image.new('RGBA', img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
img = img.convert("RGB")
img.save(out_path)
raw_datasets = load_dataset(
"coco.py",
"2017",
cache_dir="./huggingface_datasets",
)
train_dataset = raw_datasets["train"]
label_list = raw_datasets["train"].features["objects"].feature['label'].names
for idx, item in zip(range(10), train_dataset):
draw_bbox(item["image"], item["objects"], item["image/filename"], label_list)
```


## Panoptic segmentation
```python
import numpy as np
from datasets import load_dataset
from PIL import Image, ImageFont, ImageDraw, ImageColor
from transformers.image_transforms import (
rgb_to_id,
)
def calc_lum(rgb):
return (0.2126*rgb[0] + 0.7152*rgb[1] + 0.0722*rgb[2])
COLOR_MAP = [ImageColor.getrgb(code) for name, code in ImageColor.colormap.items()]
def get_text_bbox(bb, tbb, margin, im_w, im_h, anchor="leftBottom"):
m = margin
l, t, r, b = bb
tl, tt, tr, tb = tbb
bbw, bbh = r - l, b - t
tbbw, tbbh = tr - tl, tb - tt
# bbox (left-top)
if anchor == "leftTop":
ax, ay = l, t
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-bottom)
x1, y1 = max(ax, 0), max(ay - tb - 2*m, 0)
x2, y2 = min(x1 + tr + 2*m, im_w), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-top)
x1, y1 = max(ax, 0), max(ay, 0)
x2, y2 = min(x1 + tr + 2*m, im_w), min(y1 + tb + 2*m, im_h)
return (( x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "rightTop":
ax, ay = r, t
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-bottom)
x2, y1 = max(ax, 0), max(ay - tb - 2*m, 0)
x1, y2 = max(x2 - tr - 2*m, 0), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-top)
x2, y1 = max(ax, 0), max(ay, 0)
x1, y2 = max(x2 - tr - 2*m, 0), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "rightBottom":
ax, ay = r, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x2, y2 = min(ax, im_w), min(ay + tb + 2*m, im_h)
x1, y1 = max(x2 - tr - 2*m, 0), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x2, y2 = min(ax, im_w), max(ay, 0)
x1, y1 = max(x2 - tr - 2*m, 0), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "leftBottom":
ax, ay = l, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x1, y2 = min(ax, im_w), min(ay + tb + 2*m, im_h)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x1, y2 = min(ax, im_w), max(ay, 0)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "centerBottom":
ax, ay = (l+r)//2, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x1, y2 = min(ax - tr//2 - m, im_w), min(ay + tb + 2*m, im_h)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x1, y2 = min(ax - tr//2 - m, im_w), max(ay, 0)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
# Copied from transformers.models.detr.image_processing_detr.masks_to_boxes
def masks_to_boxes(masks: np.ndarray) -> np.ndarray:
"""
Compute the bounding boxes around the provided panoptic segmentation masks.
Args:
masks: masks in format `[number_masks, height, width]` where N is the number of masks
Returns:
boxes: bounding boxes in format `[number_masks, 4]` in xyxy format
"""
if masks.size == 0:
return np.zeros((0, 4))
h, w = masks.shape[-2:]
y = np.arange(0, h, dtype=np.float32)
x = np.arange(0, w, dtype=np.float32)
# see https://github.com/pytorch/pytorch/issues/50276
y, x = np.meshgrid(y, x, indexing="ij")
x_mask = masks * np.expand_dims(x, axis=0)
x_max = x_mask.reshape(x_mask.shape[0], -1).max(-1)
x = np.ma.array(x_mask, mask=~(np.array(masks, dtype=bool)))
x_min = x.filled(fill_value=1e8)
x_min = x_min.reshape(x_min.shape[0], -1).min(-1)
y_mask = masks * np.expand_dims(y, axis=0)
y_max = y_mask.reshape(x_mask.shape[0], -1).max(-1)
y = np.ma.array(y_mask, mask=~(np.array(masks, dtype=bool)))
y_min = y.filled(fill_value=1e8)
y_min = y_min.reshape(y_min.shape[0], -1).min(-1)
return np.stack([x_min, y_min, x_max, y_max], 1)
def draw_seg(image, panoptic_image, oids, labels, out_path, label_names=None, font="Roboto-Bold.ttf", fontsize=15, opacity=160, anchor="leftBottom"):
fnt = ImageFont.truetype(font, fontsize)
im_w, im_h = image.size
masks = np.asarray(panoptic_image, dtype=np.uint32)
masks = rgb_to_id(masks)
oids = np.array(oids, dtype=np.uint32)
masks = masks == oids[:, None, None]
masks = masks.astype(np.uint8)
bboxes = masks_to_boxes(masks)
img = image.convert("RGBA")
for label, mask, bbox in zip(labels, masks, bboxes):
c = COLOR_MAP[min(label, len(COLOR_MAP)-1)]
cf = np.array(c + (opacity, )).astype(np.uint8)
cmask = mask[:, :, None] * cf[None, None, :]
cmask = Image.fromarray(cmask)
img = Image.alpha_composite(img, cmask)
if label_names is not None:
text = label_names[label]
tbb = fnt.getbbox(text)
btn_bbox, text_pos = get_text_bbox(bbox, tbb, 3, im_w, im_h, anchor=anchor)
overlay = Image.new('RGBA', img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
fc = (0, 0, 0) if calc_lum(c) > 150 else (255, 255, 255)
draw.rectangle(btn_bbox, outline=c, fill=c + (255, ))
draw.text(text_pos, text, font=fnt, fill=fc + (255, ))
img = Image.alpha_composite(img, overlay)
img = img.convert("RGB")
img.save(out_path)
raw_datasets = load_dataset(
"coco.py",
"2017_panoptic",
cache_dir="./huggingface_datasets",
# data_dir="./data",
)
train_dataset = raw_datasets["train"]
label_list = raw_datasets["train"].features["panoptic_objects"].feature['label'].names
for idx, item in zip(range(10), train_dataset):
draw_seg(
item["image"],
item["panoptic_image"],
item["panoptic_objects"]["id"],
item["panoptic_objects"]["label"],
"panoptic_" + item["image/filename"],
label_list)
```


|
false | # Dataset Card for "short-interest-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The short-interest-sp500 dataset provides short interest data for companies listed on the S&P 500 index. This includes the number of shares that have been sold short but have not yet been covered or closed out.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string representing the date when the data was collected.
- id (int64): A unique integer identifier for each data instance.
- settlement_date (timestamp[ns]): The date by which a buyer must pay for the securities delivered by the seller.
- interest (float64): A floating point number representing the short interest of the company on the specified date.
- avg_daily_share_volume (float64): A floating point number representing the average daily trading volume of the company.
- days_to_cover (float64): A floating point number representing the days-to-cover metric, i.e., the number of days of average trading volume needed to cover the current short interest.
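The days-to-cover value can be recomputed from the other two fields as a consistency check: it is the short interest divided by the average daily share volume. A minimal sketch with placeholder numbers:
```python
def days_to_cover(short_interest: float, avg_daily_share_volume: float) -> float:
    """Days of average trading volume needed to cover the current short interest."""
    return short_interest / avg_daily_share_volume

# Placeholder example: 5M shares sold short, 1.25M shares traded per day on average.
print(days_to_cover(5_000_000, 1_250_000))  # -> 4.0
```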
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The short-interest-sp500 dataset was created to facilitate the study of market dynamics, particularly the role of short selling.
### Source Data
#### Initial Data Collection and Normalization
The dataset was compiled from publicly available sources.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The short-interest-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The short-interest-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, short-interest-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
false |
# Dataset Card for "answerable-tydiqa"
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Paper:** [Paper](https://aclanthology.org/2020.tacl-1.30/)
- **Size of downloaded dataset files:** 75.43 MB
- **Size of the generated dataset:** 131.78 MB
- **Total amount of disk used:** 207.21 MB
### Dataset Summary
[TyDi QA](https://huggingface.co/datasets/tydiqa) is a question answering dataset covering 11 typologically diverse languages.
Answerable TyDi QA is an extension of the GoldP subtask of the original TyDi QA dataset to also include unanswerable questions.
## Dataset Structure
The dataset contains a train and a validation set, with 116067 and 13325 examples, respectively. Access them with
```py
from datasets import load_dataset
dataset = load_dataset("copenlu/answerable_tydiqa")
train_set = dataset["train"]
validation_set = dataset["validation"]
```
### Data Instances
Here is an example of an instance of the dataset:
```
{'question_text': 'dimanakah Dr. Ernest François Eugène Douwes Dekker meninggal?',
'document_title': 'Ernest Douwes Dekker',
'language': 'indonesian',
'annotations':
{'answer_start': [45],
'answer_text': ['28 Agustus 1950']
},
'document_plaintext': 'Ernest Douwes Dekker wafat dini hari tanggal 28 Agustus 1950 (tertulis di batu nisannya; 29 Agustus 1950 versi van der Veur, 2006) dan dimakamkan di TMP Cikutra, Bandung.',
'document_url': 'https://id.wikipedia.org/wiki/Ernest%20Douwes%20Dekker'}
```
Description of the dataset columns:
| Column name | type | Description |
| ----------- | ----------- | ----------- |
| document_title | str | The title of the Wikipedia article from which the data instance was generated |
| document_url | str | The URL of said article |
| language | str | The language of the data instance |
| question_text | str | The question to answer |
| document_plaintext | str | The context, a Wikipedia paragraph that might or might not contain the answer to the question |
| annotations["answer_start"] | list[int] | The char index in 'document_plaintext' where the answer starts. If the question is unanswerable - [-1] |
| annotations["answer_text"] | list[str] | The answer, a span of text from 'document_plaintext'. If the question is unanswerable - [''] |
**Notice:** If the question is *answerable*, annotations["answer_start"] and annotations["answer_text"] contain a list of length 1
(In some variations of the dataset the lists might be longer, e.g. if more than one person annotated the instance, but not in our case).
If the question is *unanswerable*, annotations["answer_start"] will contain [-1], while annotations["answer_text"] will contain a list with an empty string.
## Useful stuff
Check out the [datasets documentation](https://huggingface.co/docs/datasets/quickstart) to learn how to manipulate and use the dataset. Specifically, you might find the following functions useful:
`dataset.filter`, for filtering out data (useful for keeping instances of specific languages, for example).
`dataset.map`, for manipulating the dataset.
`dataset.to_pandas`, to convert the dataset into a pandas.DataFrame format.
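For example, `dataset.filter` can separate answerable from unanswerable questions using the `answer_start == [-1]` convention described above:
```python
from datasets import load_dataset

dataset = load_dataset("copenlu/answerable_tydiqa")
train_set = dataset["train"]

# Unanswerable questions are marked with answer_start == [-1].
answerable = train_set.filter(lambda ex: ex["annotations"]["answer_start"][0] != -1)
unanswerable = train_set.filter(lambda ex: ex["annotations"]["answer_start"][0] == -1)
print(len(answerable), "answerable /", len(unanswerable), "unanswerable")
```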
```
@article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
false |
# Dataset Card for lener_br_text_to_lm
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The LeNER-Br language modeling dataset is a collection of legal texts
in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/).
The legal texts were obtained from the original token classification Hugging Face
LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create
a DatasetDict with train and validation dataset (20%).
The LeNER-Br language modeling dataset allows the finetuning of language models
such as BERTimbau base and large.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
```
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 8316
})
test: Dataset({
features: ['text'],
num_rows: 2079
})
})
```
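A hedged loading sketch for this structure; the repository identifier below is hypothetical and should be replaced with the actual Hub name of this dataset:
```python
from datasets import load_dataset

# Hypothetical identifier -- replace with the actual Hub repository name.
dataset = load_dataset("<user>/lener_br_text_to_lm")
print(dataset["train"][0]["text"][:200])
```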
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
false |
# Dataset Card for "RO-Offense-Sequences"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
<!--
- **Paper:** News-RO-Offense - A Romanian Offensive Language Dataset and Baseline Models Centered on News Article Comments
-->
- **Homepage:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Repository:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
A novel Romanian language dataset for offensive language detection, with manually
annotated offensive labels for comments from a local Romanian sports news website (gsp.ro),
resulting in 12,445 annotated messages.
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'id': 5,
'text':'PLACEHOLDER TEXT',
'label': 'OTHER'
}
```
### Data Fields
- `id`: The unique comment ID, corresponding to the ID in [RO Offense](https://huggingface.co/datasets/readerbench/ro-offense)
- `text`: full comment text
- `label`: the type of offensive message (OTHER, PROFANITY, INSULT, ABUSE)
### Data Splits
Train | Other | Profanity | Insult | Abuse
:---| :---| :---| :---| :---:
9953 | 3656 | 1293 | 2236 | 2768
Test | Other | Profanity | Insult | Abuse
:---| :---| :---| :---| :---:
2492 | 916 | 324 | 559 | 693
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification for Romanian Language.
For the labeling of texts we loosely base our definitions on the Germeval 2019 task for detecting offensive language in german tweets (Struß et al., 2019)
Data source: Comments on articles in Gazeta Sporturilor (gsp.ro) between 2011 and 2020
Selection for annotation: we select comments from a pool of specific articles based on the number of comments in the article.
The number of comments per article has the following distribution:
```
mean 183.820923
std 334.707177
min 1.000000
25% 20.000000
50% 58.000000
75% 179.000000
max 2151.000000
```
Based on this we select only comments from articles having between 20 and 50 comments. Also, we remove comments containing URLs or three consecutive *, since these were mostly censored by editors or automatic profanity detection algorithms.
Additionally, in order to have some meaningful messages for annotation, we select only messages with length between 50 and 500 characters.
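A minimal sketch of the comment-level selection filters described above (length between 50 and 500 characters, no URLs, no runs of three or more asterisks); the exact regular expressions used by the authors are assumptions:
```python
import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")
CENSORED_RE = re.compile(r"\*{3,}")

def keep_comment(text: str) -> bool:
    """Apply the comment-level selection filters described above."""
    if not 50 <= len(text) <= 500:
        return False
    if URL_RE.search(text) or CENSORED_RE.search(text):
        return False
    return True

print(keep_comment("Un comentariu suficient de lung pentru a fi pastrat in setul adnotat, fara linkuri."))
```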
### Source Data
Sports News Articles comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Sports News Article readers
### Annotations
- Andrei Paraschiv
- Irina Maria Sandu
#### Annotation process
##### OTHER
Label used for non offensive texts.
##### PROFANITY
This is the "lighter" form of abusive language. When profane words are used without a direct intend on offending a target, or without ascribing some negative qualities to a target we use this label. Some messages in this class may even have a positive sentiment and uses swearwords as emphasis. Messages containing profane words that are not directed towards a specific group or person, we label as **PROFANITY**
Also, self censored messages with swear words having some letters hidden, or some deceitful misspellings of swearwords that have clear intend on circumventing profanity detectors will be treated as **PROFANITY**.
##### INSULT
The message clearly wants to offend someone, ascribing negatively evaluated qualities or deficiencies, labeling a person or a group of persons as unworthy or unvalued. Insults imply disrespect and contempt directed towards a target.
##### ABUSE
This label marks messages containing the stronger form of offensive and abusive language. This type of language ascribes to the target a social identity that is judged negatively by the majority of society, or at least is perceived as a mostly negatively judged identity. Shameful, unworthy or morally unacceptable identities fall in this category. In contrast to insults, instances of abusive language require that the target of judgment is seen as a representative of a group and is ascribed negative qualities that are taken to be universal, omnipresent and unchangeable characteristics of the group.
Additionally, dehumanizing language targeting a person or group is also classified as ABUSE.
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under the Apache-2.0 license.
### Citation Information
```
tbd
```
### Contributions |
false | # Seeds
The [Seeds dataset](https://archive-beta.ics.uci.edu/dataset/236/seeds) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| seeds | Multiclass classification.| |
| seeds_0 | Binary classification. | Is the seed of class 0? |
| seeds_1 | Binary classification. | Is the seed of class 1? |
| seeds_2 | Binary classification. | Is the seed of class 2? | |
false | # Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
## Dataset Description
Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
Datasets are in JSON Lines format (each line is a json dictionary).
The datasets are formatted similarly to the CONLL2003 format. Each token is associated with an NER tag. The tags follow the "B-" and "I-" convention from the [IOB2 syntax](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
Each entry consists of a dictionary with the following keys:
- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format)
The following keys are not strictly needed by the participants:
- `"ner_ids"`: the pre-computed list of ids corresponding to the `ner_tags`, as given by the dictionary in `ner_tags.json`
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
## Instructions for Workshop participants:
How to load the data using the Huggingface library:
```python
from datasets import load_dataset
dataset = load_dataset("adsabs/WIESP2022-NER")
```
How to load the data if you cloned the repository locally:
(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed)
- python (as list of dictionaries):
```python
import json
with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
wiesp_dev_json = [json.loads(l) for l in list(f)]
```
- into Huggingface (as a Huggingface Dataset):
```python
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```
How to compute your scores on the training data:
1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
2. pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names).
Requirements to run the scoring scripts:
[NumPy](https://numpy.org/install/)
[scikit-learn](https://scikit-learn.org/stable/install.html)
[seqeval](https://github.com/chakki-works/seqeval#installation)
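Putting the two steps above together, a minimal sketch of the local scoring workflow (the placeholder all-"O" predictions, the import paths and the exact call signatures of `compute_MCC()` / `compute_seqeval()` are assumptions based on the description above; check the scripts in `scoring-scripts/` for the actual interfaces):
```python
from datasets import Dataset

# References: the training data shipped with this repository.
references = Dataset.from_json("./WIESP2022-NER-TRAINING.jsonl")

# Step 1: format predictions with the same "unique_id" and "tokens",
# plus the predicted tags under "pred_ner_tags".
predictions = Dataset.from_dict({
    "unique_id": references["unique_id"],
    "tokens": references["tokens"],
    # Placeholder predictions (everything tagged "O"); replace with real model output.
    "pred_ner_tags": [["O"] * len(toks) for toks in references["tokens"]],
})

# Step 2: pass references and predictions to the scoring functions.
# Assumes scoring-scripts/ is on the Python path and that the functions
# take (references, predictions) positionally.
from compute_seqeval import compute_seqeval
from compute_MCC import compute_MCC

print(compute_seqeval(references, predictions))
print(compute_MCC(references, predictions))
```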
To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the [Codalabs](https://codalab.lisn.upsaclay.fr/competitions/5062) competition.
## File list
```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-VALIDATION.jsonl : 1366 samples for validation
├── WIESP2022-NER-TESTING-NO-LABELS.jsonl : 2505 samples for testing without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-TESTING.jsonl : 2505 samples for testing
├── README.MD : this file.
├── tag_definitions.md : short descriptions and examples of the tags used in the task.
└── scoring-scripts/ : scripts used to evaluate submissions.
├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
└── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
```
## Cite as
[Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL)](https://aclanthology.org/2022.wiesp-1.1) (Grezes et al., WIESP 2022)
```bibtex
@inproceedings{grezes-etal-2022-overview,
title = "Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature ({DEAL})",
author = "Grezes, Felix and
Blanco-Cuaresma, Sergi and
Allen, Thomas and
Ghosal, Tirthankar",
booktitle = "Proceedings of the first Workshop on Information Extraction from Scientific Publications",
month = "nov",
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wiesp-1.1",
pages = "1--7",
abstract = "In this article, we describe the overview of our shared task: Detecting Entities in the Astrophysics Literature (DEAL). The DEAL shared task was part of the Workshop on Information Extraction from Scientific Publications (WIESP) in AACL-IJCNLP 2022. Information extraction from scientific publications is critical in several downstream tasks such as identification of critical entities, article summarization, citation classification, etc. The motivation of this shared task was to develop a community-wide effort for entity extraction from astrophysics literature. Automated entity extraction would help to build knowledge bases, high-quality meta-data for indexing and search, and several other use-cases of interests. Thirty-three teams registered for DEAL, twelve of them participated in the system runs, and finally four teams submitted their system descriptions. We analyze their system and performance and finally discuss the findings of DEAL.",
}
``` |
true |
# Dataset Card for Social Media Abuse
<!-- ## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** -->
### Dataset Summary
This is a dataset to use with a **SetFit** model to generate more examples of online social media abuse. It replicates the kind of dataset you would see at a moderation company, such as Crisp Thinking (https://www.crispthinking.com/), whose models are trained on billions of examples of various risk types on social media.
The dataset here is a mocked-up example of such data, intended to demonstrate the efficacy of the SetFit approach on a sentiment classification task.
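A minimal sketch of training a few-shot SetFit classifier on the two documented fields (`text`, `label`); the tiny in-memory dataset below stands in for this repository, and the label encoding (1 = abusive) and the base sentence-transformer checkpoint are illustrative assumptions:
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Stand-in for the real dataset: a handful of (text, label) pairs,
# with label 1 marking abusive text and 0 marking non-abusive text (assumed encoding).
train_ds = Dataset.from_dict({
    "text": [
        "You are a wonderful person, thanks for your help!",
        "I really enjoyed this match, great atmosphere.",
        "Nobody wants you here, just disappear.",
        "You are worthless and everyone knows it.",
    ],
    "label": [0, 0, 1, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()

# Predict on unseen text.
print(model.predict(["Get lost, you pathetic loser."]))
```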
### Supported Tasks and Leaderboards
This is to be used for *text-classification*.
### Languages
Contains only `en` abusive sentiments.
## Dataset Structure
### Data Fields
This contains 2 fields:
1. text
2. label
### Data Splits
This contains a single training split, as the volumes are low: 8 positive examples and 8 negative examples.
<!-- ## Dataset Creation -->
<!-- ### Curation Rationale
[More Information Needed] -->
<!-- ### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed] -->
<!-- ### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed] -->
## Considerations for Using the Data
### Social Impact of Dataset
This dataset, or similar, could be used for social media moderation and risk detection purposes.
### Discussion of Biases
The data are biased towards a small, fixed number of examples and have been generated as dummy data.
## Additional Information
### Dataset Curators
Gary Hutson
### Licensing Information
GNU General Public License v3.
### Citation Information
Hutson, G. (2023). *Social Media Abuse dataset*.
### Contributions
Thanks to [StatsGary](https://github.com/StatsGary) for adding this dataset. |
false |
Movie NER dataset
|
false |
# Dataset Card for ekar_english
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** jjchen19@fudan.edu.cn
### Dataset Summary
***New!*** (9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find the previous version (as in the paper) in the `v1.0` branch of the repo. For more information please refer to https://ekar-leaderboard.github.io.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underlying process of analogical reasoning in neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
- `EASY mode`: where query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
### Languages
This dataset is in English, which is translated from [its Chinese version](https://huggingface.co/datasets/Jiangjie/ekar_chinese/)
## Dataset Structure
### Data Instances
```json
{
"id": "982f17-en",
"question": "plant:coal",
"choices": {
"label": [
"A",
"B",
"C",
"D"
],
"text": [
"white wine:aged vinegar",
"starch:corn",
"milk:yogurt",
"pickled cabbage:cabbage"
]
},
"answerKey": "C",
"explanation": [
"\"plant\" is the raw material of \"coal\".",
"both \"white wine\" and \"aged vinegar\" are brewed.",
"\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
"\"yogurt\" is made from \"milk\".",
"\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
],
"relation": [
[["plant", "coal", "R3.7"]],
[["white wine", "aged vinegar", "R2.4"]],
[["corn", "starch", "R3.7"]],
[["milk", "yogurt", "R3.7"]],
[["cabbage", "pickled cabbage", "R3.7"]]
]
}
```
### Data Fields
- id: a string identifier for each example.
- question: query terms.
- choices: candidate answer terms.
- answerKey: correct answer.
- explanation: explanations for query (1st) and candidate answers (2nd-5th).
- relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
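A minimal sketch of loading the data and building inputs for the two task modes described above (the repository id is assumed to mirror the Chinese version, i.e. `Jiangjie/ekar_english`, and no configuration name is passed; adjust if the loader defines configurations):
```python
from datasets import load_dataset

# Assumed repository id; the Chinese version lives at Jiangjie/ekar_chinese.
ds = load_dataset("Jiangjie/ekar_english", split="train")

ex = ds[0]
choices = "\n".join(
    f"{lab}. {txt}" for lab, txt in zip(ex["choices"]["label"], ex["choices"]["text"])
)

# HARD mode: no explanation is allowed as part of the input.
hard_input = f"Query: {ex['question']}\n{choices}"

# EASY mode: the query explanation (the first entry) may be used as part of the input.
easy_input = f"Query: {ex['question']}\nHint: {ex['explanation'][0]}\n{choices}"

print(easy_input)
print("Answer:", ex["answerKey"])
```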
### Data Splits
| name |train|validation|test|
|:-----:|:---:|:--------:|:--:|
|default| 870| 119| 262|
|description| | | blinded |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, despite the effort that the authors try to remove or rewrite such problems, it may still contain information biased to Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
3. The English version of E-KAR is machine-translated and post-edited by humans. Although the authors have tried their best to maintain the translation quality, there could be some unsatisfying samples in the English dataset, e.g., culture-specific ones, ambiguous ones after translation, etc.
## Additional Information
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
[Needs More Information]
### Citation Information
```latex
@inproceedings{chen-etal-2022-e,
title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
author = "Chen, Jiangjie and
Xu, Rui and
Fu, Ziquan and
Shi, Wei and
Li, Zhongqiao and
Zhang, Xinbo and
Sun, Changzhi and
Li, Lei and
Xiao, Yanghua and
Zhou, Hao",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.311",
pages = "3941--3955",
}
``` |
false |
# Dataset Card for "CrossSum"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/CrossSum](https://github.com/csebuetnlp/CrossSum)
- **Paper:** [CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs](https://arxiv.org/abs/2112.08804)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
We present CrossSum, a large-scale dataset
comprising 1.70 million cross-lingual article summary samples in 1500+ language-pairs
constituting 45 languages. We use the multilingual XL-Sum dataset and align identical
articles written in different languages via crosslingual retrieval using a language-agnostic
representation model.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Languages
- `amharic`
- `arabic`
- `azerbaijani`
- `bengali`
- `burmese`
- `chinese_simplified`
- `chinese_traditional`
- `english`
- `french`
- `gujarati`
- `hausa`
- `hindi`
- `igbo`
- `indonesian`
- `japanese`
- `kirundi`
- `korean`
- `kyrgyz`
- `marathi`
- `nepali`
- `oromo`
- `pashto`
- `persian`
- `pidgin`
- `portuguese`
- `punjabi`
- `russian`
- `scottish_gaelic`
- `serbian_cyrillic`
- `serbian_latin`
- `sinhala`
- `somali`
- `spanish`
- `swahili`
- `tamil`
- `telugu`
- `thai`
- `tigrinya`
- `turkish`
- `ukrainian`
- `urdu`
- `uzbek`
- `vietnamese`
- `welsh`
- `yoruba`
## Loading the dataset
```python
from datasets import load_dataset
# for available language names, see above
src_lang = "english"
tgt_lang = "bengali"
ds = load_dataset(f"csebuetnlp/CrossSum", "{}-{}".format(src_lang, tgt_lang))
```
## Dataset Structure
### Data Instances
One example from the `English` dataset is given below in JSON format.
```
{
"source_url": "https://www.bbc.com/japanese/53074000",
"target_url": "https://www.bbc.com/bengali/news-53064712",
"summary": "বিজ্ঞানীরা বলছেন ডেক্সামেথাসোন নামে সস্তা ও সহজলভ্য একটি ওষুধ করোনাভাইরাসে গুরুতর অসুস্থ রোগীদের জীবন রক্ষা করতে সাহায্য করবে।",
"text": "ミシェル・ロバーツ、BBCニュースオンライン健康担当編集長 英オックスフォード大学の研究チームによると、低用量のデキサメタゾンは新型ウイルスとの戦いで画期的な突破口になる。 新型コロナウイルスに対し、様々な既存の治療法の効果を試す世界的規模の臨床試験の一貫として、デキサメタゾンが試された。 その結果、人工呼吸器を必要とする重症患者の致死率が3割下がり、酸素供給を必要とする患者の場合は2割下がった。 新型ウイルスのパンデミック(世界的流行)の初期からイギリスでデキサメタゾンを治療に使用していた場合、最大5000人の命が救えたはずだと研究者たちは言う。 さらに、新型コロナウイルスによる感染症「COVID-19」の患者が多く出ている貧しい国にとっても、安価なデキサメタゾンを使う治療は大いに役立つと期待される。 重症者の致死率が大幅に下がる イギリス政府は20万人分の投与量を備蓄しており、国民医療制度の国民保健サービス(NHS)で患者への使用を開始する方針を示した。 ボリス・ジョンソン英首相は「イギリス科学界の素晴らしい成果」を歓迎し、「たとえ感染の第2波が来ても備蓄が足りるよう、数を確保するための措置をとった」と述べた。 イングランド首席医務官クリス・ウィッティー教授は、「COVID-19にとってこれまでで一番重要な臨床試験結果だ。手に入りやすく安全でなじみのある薬によって、酸素供給や人工呼吸器が必要な人の致死率が大幅に下がった。(中略)この発見が世界中で人命を救う」と評価した。 <関連記事> 新型コロナウイルスに20人が感染した場合、19人は入院しないまま回復する。入院する人もほとんどは回復するものの、重症化して酸素供給や人工呼吸器を必要とする人もいる。 デキサメタゾンはこうした重症患者の治療に効果があるもよう。 新型ウイルスに感染した患者の体内では、ウイルスと戦う免疫系が暴走することがある。その免疫系の過剰反応による体の損傷を、デキサメタゾンが緩和するものとみられる。 「サイトカイン・ストーム」と呼ばれる免疫系の過剰反応が、患者の命を奪うこともある。 デキサメタゾンはすでに抗炎症剤として、ぜんそくや皮膚炎など様々な症状の治療に使われている。 初めて致死率を下げる薬 オックスフォード大学が主導する臨床試験は、約2000人の入院患者にデキサメタゾンを投与。それ以外の4000人以上の患者と容体を比較した。 人工呼吸器を使用する患者については、死亡リスクが40%から28%に下がった。 酸素供給する患者は、死亡リスクが25%から20%に下がった。 研究チームのピーター・ホービー教授は、「今のところ、致死率を実際に下げる結果が出たのは、この薬だけだ。しかも、致死率をかなり下げる。画期的な突破口だ」と話した。 研究を主導するマーティン・ランドレイ教授によると、人工呼吸器を使う患者の8人に1人、ならびに酸素供給治療を受ける患者の20-25人に1人が、デキサメタゾンで救えることが分かったという。 「これはきわめて明確なメリットだ」と教授は言う。 「最大10日間、デキサメタゾンを投与するという治療法で、費用は患者1人あたり1日約5ポンド(約670円)。つまり、35ポンド(約4700円)で人ひとりの命が救える」 「しかもこれは、世界中で手に入る薬だ」 状況が許す限り、新型コロナウイルスで入院中の患者にはただちに投与を開始すべきだと、ランドレイ教授は促した。 ただし、自宅で自己治療するために薬局に買いに行くべきではないと言う。 デキサメタゾンは、呼吸補助を必要としない軽症の患者には効果がないもよう。 3月に始動した新型コロナウイルス治療薬の無作為化臨床試験「リカバリー・トライアル」は、抗マラリア薬「ヒドロキシクロロキン」も調べたものの、心臓疾患や致死率の悪化につながるという懸念から、ヒドロキシクロロキンについては試験を中止した。 一方で、感染者の回復にかかる時間を短縮するとみられるレムデシビルは、すでにNHSの保険対象になり治療現場で使われている。 <解説> ファーガス・ウォルシュBBC健康担当編集委員 COVID-19の死者を減らすと初めて立証された薬は、高価な新しい薬ではなく、古くからずっと使われてきた、きわめて安いステロイド剤だった。 世界中の患者が直ちにその恩恵を受けることになるので、これは歓迎すべき発見だ。 この臨床試験の最新成果がこれほど急いで発表されたのは、そのためだ。とてつもない影響を世界中にもたらすので。 デキサメタゾンは1960年代初めから、関節リウマチやぜんそくなど、幅広い症状の治療に使われてきた。 これまでは、人工呼吸器を必要とするCOVID-19患者の半数が亡くなってきた。その致死率を3割減らすというのは、絶大な効果だ。 集中治療室では点滴で投与する。もう少し軽症な患者には、錠剤で与える。 これまでのところ、COVID-19患者に効果があると証明された薬は、エボラ治療薬のレムデシビルだけだった。 レムデシビルは症状の回復期間を15日から11日に短縮する。しかし、致死率を下げると言えるだけの証拠は出ていなかった。 デキサメタゾンと異なり、レムデシビルは数の少ない新薬で、薬価もまだ公表されていない。"
}
```
### Data Fields
- 'source_url': A string representing the source article URL.
- 'target_url': A string representing the target article URL.
- 'summary': A string containing the article summary.
- 'text' : A string containing the article text.
### Data Splits
No. of total examples for each language pair are as follows:
Language (ISO 639-1-Code) | am | ar | az | bn | my | zh-CN | zh-TW | en | fr | gu | ha | hi | ig | id | ja | rn | ko | ky | mr | np | om | ps | fa | pcm | pt | pa | ru | gd | sr | sr | si | so | es | sw | ta | te | th | ti | tr | uk | ur | uz | vi | cy | yo
----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | -----
am | -- | 667 | 100 | 272 | 95 | 179 | 167 | 1456 | 358 | 173 | 221 | 377 | 26 | 494 | 264 | 423 | 244 | 92 | 221 | 301 | 21 | 192 | 431 | 209 | 307 | 189 | 347 | 0 | 357 | 365 | 62 | 309 | 351 | 378 | 390 | 329 | 124 | 131 | 435 | 345 | 409 | 41 | 285 | 1 | 67
ar | 667 | -- | 787 | 804 | 652 | 2968 | 2843 | 9653 | 989 | 475 | 747 | 3665 | 86 | 6084 | 1188 | 876 | 707 | 299 | 559 | 854 | 9 | 2161 | 4186 | 436 | 2539 | 547 | 5564 | 1 | 1109 | 1145 | 315 | 1049 | 3654 | 1186 | 1311 | 877 | 367 | 27 | 4147 | 3457 | 4935 | 388 | 2666 | 38 | 141
az | 100 | 787 | -- | 277 | 84 | 371 | 334 | 1317 | 208 | 192 | 126 | 748 | 28 | 1111 | 231 | 188 | 155 | 221 | 194 | 242 | 1 | 252 | 817 | 91 | 678 | 190 | 2238 | 4 | 289 | 283 | 124 | 367 | 704 | 539 | 515 | 245 | 140 | 2 | 1495 | 1383 | 966 | 199 | 725 | 30 | 42
bn | 272 | 804 | 277 | -- | 139 | 318 | 284 | 1549 | 317 | 559 | 231 | 1396 | 35 | 1076 | 342 | 298 | 352 | 154 | 586 | 668 | 2 | 300 | 790 | 135 | 764 | 580 | 838 | 0 | 562 | 564 | 151 | 412 | 701 | 471 | 919 | 793 | 245 | 6 | 860 | 688 | 1382 | 98 | 527 | 37 | 61
my | 95 | 652 | 84 | 139 | -- | 356 | 314 | 685 | 90 | 96 | 74 | 528 | 12 | 761 | 144 | 100 | 112 | 58 | 89 | 152 | 1 | 234 | 426 | 39 | 230 | 86 | 535 | 0 | 115 | 123 | 87 | 79 | 431 | 86 | 185 | 147 | 71 | 4 | 449 | 350 | 591 | 62 | 447 | 4 | 12
zh-CN | 179 | 2968 | 371 | 318 | 356 | -- | 47101 | 4975 | 348 | 201 | 159 | 1379 | 38 | 2851 | 1017 | 240 | 412 | 139 | 240 | 275 | 14 | 559 | 1111 | 149 | 1371 | 250 | 2572 | 2 | 504 | 530 | 166 | 323 | 2002 | 412 | 511 | 353 | 269 | 11 | 1511 | 1619 | 1651 | 176 | 1858 | 33 | 39
zh-TW | 167 | 2843 | 334 | 284 | 314 | 47101 | -- | 4884 | 331 | 174 | 150 | 1213 | 35 | 2588 | 953 | 209 | 382 | 131 | 213 | 252 | 16 | 501 | 967 | 141 | 1271 | 226 | 2286 | 1 | 453 | 494 | 150 | 302 | 1873 | 383 | 465 | 335 | 250 | 12 | 1294 | 1464 | 1444 | 158 | 1663 | 31 | 38
en | 1456 | 9653 | 1317 | 1549 | 685 | 4975 | 4884 | -- | 1889 | 978 | 913 | 4728 | 144 | 10040 | 3040 | 1878 | 1673 | 490 | 1181 | 1614 | 38 | 1522 | 4680 | 1074 | 4744 | 1330 | 9080 | 128 | 3760 | 3809 | 532 | 2141 | 6910 | 2701 | 3156 | 2121 | 1020 | 58 | 5676 | 6562 | 6320 | 450 | 4574 | 2655 | 229
fr | 358 | 989 | 208 | 317 | 90 | 348 | 331 | 1889 | -- | 242 | 477 | 616 | 106 | 1018 | 274 | 735 | 264 | 124 | 241 | 323 | 4 | 196 | 602 | 439 | 921 | 247 | 849 | 2 | 555 | 569 | 98 | 502 | 990 | 872 | 425 | 380 | 185 | 10 | 829 | 721 | 766 | 76 | 438 | 40 | 159
gu | 173 | 475 | 192 | 559 | 96 | 201 | 174 | 978 | 242 | -- | 147 | 5170 | 34 | 710 | 228 | 183 | 268 | 106 | 2091 | 561 | 1 | 246 | 522 | 101 | 529 | 2210 | 582 | 0 | 331 | 345 | 125 | 261 | 540 | 300 | 1762 | 2066 | 164 | 5 | 631 | 508 | 1619 | 80 | 450 | 21 | 54
ha | 221 | 747 | 126 | 231 | 74 | 159 | 150 | 913 | 477 | 147 | -- | 460 | 202 | 901 | 157 | 485 | 135 | 61 | 159 | 239 | 5 | 229 | 487 | 529 | 375 | 157 | 525 | 1 | 258 | 258 | 49 | 391 | 463 | 568 | 299 | 260 | 87 | 9 | 519 | 400 | 526 | 59 | 352 | 30 | 362
hi | 377 | 3665 | 748 | 1396 | 528 | 1379 | 1213 | 4728 | 616 | 5170 | 460 | -- | 65 | 5627 | 623 | 489 | 520 | 234 | 3831 | 1357 | 4 | 1519 | 5351 | 192 | 6563 | 4052 | 4622 | 1 | 809 | 807 | 449 | 747 | 2931 | 893 | 3711 | 3762 | 378 | 7 | 3694 | 3935 | 15666 | 352 | 3738 | 77 | 79
ig | 26 | 86 | 28 | 35 | 12 | 38 | 35 | 144 | 106 | 34 | 202 | 65 | -- | 113 | 24 | 107 | 32 | 16 | 51 | 36 | 3 | 11 | 49 | 255 | 61 | 39 | 79 | 0 | 51 | 51 | 13 | 77 | 91 | 151 | 52 | 54 | 18 | 5 | 91 | 83 | 61 | 15 | 65 | 6 | 296
id | 494 | 6084 | 1111 | 1076 | 761 | 2851 | 2588 | 10040 | 1018 | 710 | 901 | 5627 | 113 | -- | 1274 | 994 | 774 | 347 | 745 | 1104 | 8 | 1430 | 3892 | 367 | 4409 | 725 | 7588 | 7 | 1387 | 1379 | 470 | 1312 | 4547 | 1873 | 1886 | 1131 | 599 | 9 | 5663 | 4829 | 6476 | 432 | 4810 | 145 | 174
ja | 264 | 1188 | 231 | 342 | 144 | 1017 | 953 | 3040 | 274 | 228 | 157 | 623 | 24 | 1274 | -- | 372 | 654 | 140 | 302 | 424 | 2 | 266 | 1014 | 152 | 706 | 269 | 1517 | 2 | 550 | 571 | 109 | 387 | 950 | 425 | 641 | 425 | 305 | 5 | 1242 | 1013 | 797 | 49 | 908 | 25 | 33
rn | 423 | 876 | 188 | 298 | 100 | 240 | 209 | 1878 | 735 | 183 | 485 | 489 | 107 | 994 | 372 | -- | 283 | 106 | 242 | 369 | 18 | 228 | 684 | 398 | 526 | 206 | 711 | 0 | 443 | 450 | 77 | 584 | 607 | 1186 | 521 | 363 | 149 | 13 | 724 | 610 | 617 | 59 | 631 | 20 | 180
ko | 244 | 707 | 155 | 352 | 112 | 412 | 382 | 1673 | 264 | 268 | 135 | 520 | 32 | 774 | 654 | 283 | -- | 99 | 319 | 445 | 1 | 150 | 596 | 130 | 587 | 264 | 649 | 0 | 522 | 543 | 81 | 234 | 613 | 324 | 541 | 452 | 197 | 5 | 680 | 616 | 532 | 54 | 530 | 12 | 45
ky | 92 | 299 | 221 | 154 | 58 | 139 | 131 | 490 | 124 | 106 | 61 | 234 | 16 | 347 | 140 | 106 | 99 | -- | 107 | 167 | 4 | 102 | 252 | 59 | 251 | 118 | 1013 | 1 | 206 | 211 | 45 | 145 | 279 | 150 | 206 | 174 | 109 | 3 | 346 | 508 | 270 | 113 | 201 | 12 | 23
mr | 221 | 559 | 194 | 586 | 89 | 240 | 213 | 1181 | 241 | 2091 | 159 | 3831 | 51 | 745 | 302 | 242 | 319 | 107 | -- | 630 | 1 | 232 | 608 | 138 | 524 | 1797 | 675 | 0 | 419 | 436 | 129 | 270 | 603 | 332 | 1776 | 1886 | 196 | 11 | 706 | 596 | 1395 | 79 | 473 | 16 | 48
np | 301 | 854 | 242 | 668 | 152 | 275 | 252 | 1614 | 323 | 561 | 239 | 1357 | 36 | 1104 | 424 | 369 | 445 | 167 | 630 | -- | 1 | 303 | 916 | 134 | 706 | 545 | 849 | 2 | 553 | 538 | 164 | 420 | 687 | 513 | 994 | 741 | 217 | 7 | 930 | 741 | 1156 | 84 | 719 | 39 | 65
om | 21 | 9 | 1 | 2 | 1 | 14 | 16 | 38 | 4 | 1 | 5 | 4 | 3 | 8 | 2 | 18 | 1 | 4 | 1 | 1 | -- | 2 | 3 | 11 | 4 | 6 | 8 | 0 | 2 | 3 | 0 | 6 | 7 | 5 | 2 | 2 | 1 | 103 | 5 | 10 | 1 | 4 | 2 | 0 | 7
ps | 192 | 2161 | 252 | 300 | 234 | 559 | 501 | 1522 | 196 | 246 | 229 | 1519 | 11 | 1430 | 266 | 228 | 150 | 102 | 232 | 303 | 2 | -- | 2815 | 94 | 594 | 249 | 1246 | 0 | 235 | 242 | 156 | 304 | 766 | 314 | 441 | 314 | 92 | 8 | 1049 | 818 | 2833 | 156 | 657 | 7 | 32
fa | 431 | 4186 | 817 | 790 | 426 | 1111 | 967 | 4680 | 602 | 522 | 487 | 5351 | 49 | 3892 | 1014 | 684 | 596 | 252 | 608 | 916 | 3 | 2815 | -- | 186 | 5512 | 541 | 4328 | 0 | 1028 | 1023 | 276 | 812 | 2512 | 1002 | 1250 | 797 | 364 | 8 | 3695 | 3567 | 6752 | 313 | 3190 | 66 | 74
pcm | 209 | 436 | 91 | 135 | 39 | 149 | 141 | 1074 | 439 | 101 | 529 | 192 | 255 | 367 | 152 | 398 | 130 | 59 | 138 | 134 | 11 | 94 | 186 | -- | 227 | 112 | 322 | 0 | 234 | 246 | 28 | 219 | 314 | 436 | 232 | 162 | 85 | 28 | 287 | 280 | 232 | 18 | 170 | 9 | 462
pt | 307 | 2539 | 678 | 764 | 230 | 1371 | 1271 | 4744 | 921 | 529 | 375 | 6563 | 61 | 4409 | 706 | 526 | 587 | 251 | 524 | 706 | 4 | 594 | 5512 | 227 | -- | 579 | 4452 | 7 | 1371 | 1341 | 231 | 602 | 7112 | 983 | 1042 | 820 | 468 | 3 | 3483 | 4421 | 6759 | 186 | 3754 | 110 | 97
pa | 189 | 547 | 190 | 580 | 86 | 250 | 226 | 1330 | 247 | 2210 | 157 | 4052 | 39 | 725 | 269 | 206 | 264 | 118 | 1797 | 545 | 6 | 249 | 541 | 112 | 579 | -- | 629 | 0 | 410 | 404 | 128 | 283 | 585 | 357 | 1726 | 1892 | 200 | 10 | 643 | 570 | 1515 | 73 | 431 | 16 | 44
ru | 347 | 5564 | 2238 | 838 | 535 | 2572 | 2286 | 9080 | 849 | 582 | 525 | 4622 | 79 | 7588 | 1517 | 711 | 649 | 1013 | 675 | 849 | 8 | 1246 | 4328 | 322 | 4452 | 629 | -- | 5 | 1495 | 1460 | 373 | 1166 | 4864 | 1672 | 1628 | 892 | 595 | 7 | 6223 | 22241 | 5309 | 809 | 3963 | 134 | 125
gd | 0 | 1 | 4 | 0 | 0 | 2 | 1 | 128 | 2 | 0 | 1 | 1 | 0 | 7 | 2 | 0 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 7 | 0 | 5 | -- | 2 | 3 | 2 | 1 | 3 | 1 | 0 | 0 | 1 | 0 | 6 | 5 | 2 | 1 | 3 | 36 | 2
sr | 357 | 1109 | 289 | 562 | 115 | 504 | 453 | 3760 | 555 | 331 | 258 | 809 | 51 | 1387 | 550 | 443 | 522 | 206 | 419 | 553 | 2 | 235 | 1028 | 234 | 1371 | 410 | 1495 | 2 | -- | 9041 | 127 | 377 | 1235 | 574 | 761 | 691 | 340 | 6 | 1247 | 1512 | 1021 | 109 | 685 | 42 | 69
sr | 365 | 1145 | 283 | 564 | 123 | 530 | 494 | 3809 | 569 | 345 | 258 | 807 | 51 | 1379 | 571 | 450 | 543 | 211 | 436 | 538 | 3 | 242 | 1023 | 246 | 1341 | 404 | 1460 | 3 | 9041 | -- | 137 | 382 | 1260 | 568 | 775 | 699 | 347 | 10 | 1229 | 1498 | 1009 | 112 | 639 | 45 | 79
si | 62 | 315 | 124 | 151 | 87 | 166 | 150 | 532 | 98 | 125 | 49 | 449 | 13 | 470 | 109 | 77 | 81 | 45 | 129 | 164 | 0 | 156 | 276 | 28 | 231 | 128 | 373 | 2 | 127 | 137 | -- | 137 | 260 | 189 | 348 | 173 | 69 | 7 | 301 | 306 | 510 | 38 | 216 | 5 | 15
so | 309 | 1049 | 367 | 412 | 79 | 323 | 302 | 2141 | 502 | 261 | 391 | 747 | 77 | 1312 | 387 | 584 | 234 | 145 | 270 | 420 | 6 | 304 | 812 | 219 | 602 | 283 | 1166 | 1 | 377 | 382 | 137 | -- | 689 | 1020 | 723 | 384 | 178 | 19 | 968 | 875 | 1000 | 75 | 724 | 20 | 116
es | 351 | 3654 | 704 | 701 | 431 | 2002 | 1873 | 6910 | 990 | 540 | 463 | 2931 | 91 | 4547 | 950 | 607 | 613 | 279 | 603 | 687 | 7 | 766 | 2512 | 314 | 7112 | 585 | 4864 | 3 | 1235 | 1260 | 260 | 689 | -- | 1047 | 1073 | 827 | 469 | 10 | 3645 | 3130 | 3060 | 290 | 2330 | 59 | 133
sw | 378 | 1186 | 539 | 471 | 86 | 412 | 383 | 2701 | 872 | 300 | 568 | 893 | 151 | 1873 | 425 | 1186 | 324 | 150 | 332 | 513 | 5 | 314 | 1002 | 436 | 983 | 357 | 1672 | 1 | 574 | 568 | 189 | 1020 | 1047 | -- | 929 | 492 | 261 | 10 | 1348 | 1309 | 1253 | 90 | 936 | 37 | 219
ta | 390 | 1311 | 515 | 919 | 185 | 511 | 465 | 3156 | 425 | 1762 | 299 | 3711 | 52 | 1886 | 641 | 521 | 541 | 206 | 1776 | 994 | 2 | 441 | 1250 | 232 | 1042 | 1726 | 1628 | 0 | 761 | 775 | 348 | 723 | 1073 | 929 | -- | 2278 | 400 | 14 | 1486 | 1423 | 2404 | 134 | 1092 | 32 | 68
te | 329 | 877 | 245 | 793 | 147 | 353 | 335 | 2121 | 380 | 2066 | 260 | 3762 | 54 | 1131 | 425 | 363 | 452 | 174 | 1886 | 741 | 2 | 314 | 797 | 162 | 820 | 1892 | 892 | 0 | 691 | 699 | 173 | 384 | 827 | 492 | 2278 | -- | 306 | 11 | 893 | 832 | 1748 | 107 | 644 | 21 | 61
th | 124 | 367 | 140 | 245 | 71 | 269 | 250 | 1020 | 185 | 164 | 87 | 378 | 18 | 599 | 305 | 149 | 197 | 109 | 196 | 217 | 1 | 92 | 364 | 85 | 468 | 200 | 595 | 1 | 340 | 347 | 69 | 178 | 469 | 261 | 400 | 306 | -- | 5 | 477 | 480 | 414 | 37 | 357 | 10 | 26
ti | 131 | 27 | 2 | 6 | 4 | 11 | 12 | 58 | 10 | 5 | 9 | 7 | 5 | 9 | 5 | 13 | 5 | 3 | 11 | 7 | 103 | 8 | 8 | 28 | 3 | 10 | 7 | 0 | 6 | 10 | 7 | 19 | 10 | 10 | 14 | 11 | 5 | -- | 8 | 8 | 4 | 2 | 5 | 0 | 6
tr | 435 | 4147 | 1495 | 860 | 449 | 1511 | 1294 | 5676 | 829 | 631 | 519 | 3694 | 91 | 5663 | 1242 | 724 | 680 | 346 | 706 | 930 | 5 | 1049 | 3695 | 287 | 3483 | 643 | 6223 | 6 | 1247 | 1229 | 301 | 968 | 3645 | 1348 | 1486 | 893 | 477 | 8 | -- | 4108 | 4340 | 370 | 2981 | 126 | 130
uk | 345 | 3457 | 1383 | 688 | 350 | 1619 | 1464 | 6562 | 721 | 508 | 400 | 3935 | 83 | 4829 | 1013 | 610 | 616 | 508 | 596 | 741 | 10 | 818 | 3567 | 280 | 4421 | 570 | 22241 | 5 | 1512 | 1498 | 306 | 875 | 3130 | 1309 | 1423 | 832 | 480 | 8 | 4108 | -- | 4290 | 442 | 3017 | 108 | 89
ur | 409 | 4935 | 966 | 1382 | 591 | 1651 | 1444 | 6320 | 766 | 1619 | 526 | 15666 | 61 | 6476 | 797 | 617 | 532 | 270 | 1395 | 1156 | 1 | 2833 | 6752 | 232 | 6759 | 1515 | 5309 | 2 | 1021 | 1009 | 510 | 1000 | 3060 | 1253 | 2404 | 1748 | 414 | 4 | 4340 | 4290 | -- | 389 | 3723 | 72 | 88
uz | 41 | 388 | 199 | 98 | 62 | 176 | 158 | 450 | 76 | 80 | 59 | 352 | 15 | 432 | 49 | 59 | 54 | 113 | 79 | 84 | 4 | 156 | 313 | 18 | 186 | 73 | 809 | 1 | 109 | 112 | 38 | 75 | 290 | 90 | 134 | 107 | 37 | 2 | 370 | 442 | 389 | -- | 257 | 10 | 15
vi | 285 | 2666 | 726 | 527 | 447 | 1858 | 1663 | 4575 | 438 | 450 | 352 | 3738 | 65 | 4810 | 908 | 631 | 530 | 201 | 473 | 719 | 2 | 657 | 3190 | 170 | 3755 | 431 | 3963 | 3 | 685 | 639 | 216 | 724 | 2330 | 936 | 1092 | 644 | 357 | 5 | 2982 | 3017 | 3723 | 257 | -- | 106 | 76
cy | 1 | 38 | 30 | 37 | 4 | 33 | 31 | 2655 | 40 | 21 | 30 | 77 | 6 | 145 | 25 | 20 | 12 | 12 | 16 | 39 | 0 | 7 | 66 | 9 | 110 | 16 | 134 | 36 | 42 | 45 | 5 | 20 | 59 | 37 | 32 | 21 | 10 | 0 | 126 | 108 | 72 | 10 | 106 | -- | 8
yo | 67 | 141 | 42 | 61 | 12 | 39 | 38 | 229 | 159 | 54 | 362 | 79 | 296 | 174 | 33 | 180 | 45 | 23 | 48 | 65 | 7 | 32 | 74 | 462 | 97 | 44 | 125 | 2 | 69 | 79 | 15 | 116 | 133 | 219 | 68 | 61 | 26 | 6 | 130 | 89 | 88 | 15 | 76 | 8 | --
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Source Data
[BBC News](https://www.bbc.co.uk/ws/languages)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
### Annotations
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Annotation process
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Who are the annotators?
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/CrossSum)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/CrossSum)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. |
true | # Dataset Card for eoir_privacy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset mimics privacy standards for EOIR decisions. It is meant to help learn contextual data sanitization rules to anonymize potentially sensitive contexts in crawled language data.
### Languages
English
## Dataset Structure
### Data Instances
{
  "text": masked paragraph,
  "label": whether to use a pseudonym in filling masks
}
### Data Splits
train 75%, validation 25%
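A minimal sketch of how the label could drive a downstream sanitization rule when filling the `[MASK]` tokens described under Source Data below (the pseudonym string, placeholder string and label encoding are illustrative assumptions):
```python
def fill_masks(text: str, use_pseudonym: bool) -> str:
    """Fill [MASK] tokens according to the court's pseudonymity decision.

    If the court used a pseudonym, fill with a generic pseudonym; otherwise use a
    neutral role placeholder. Both strings are arbitrary illustrative choices.
    """
    replacement = "John Doe" if use_pseudonym else "the respondent"
    return text.replace("[MASK]", replacement)

# Hypothetical example mirroring the documented fields.
example = {
    "text": "[MASK] testified that [MASK] entered the United States in 1999.",
    "label": 1,  # assumed encoding: 1 = use a pseudonym
}
print(fill_masks(example["text"], bool(example["label"])))
```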
## Dataset Creation
### Curation Rationale
This dataset mimics privacy standards for EOIR decisions. It is meant to help learn contextual data sanitization rules to anonymize potentially sensitive contexts in crawled language data.
### Source Data
#### Initial Data Collection and Normalization
We scrape EOIR. We then filter at the paragraph level and replace any references to respondent, applicant, or names with [MASK] tokens. We then determine if the case used a pseudonym or not.
#### Who are the source language producers?
U.S. Executive Office for Immigration Review
### Annotations
#### Annotation process
Annotations (i.e., pseudonymity decisions) were made by the EOIR court. We use regex to identify if a pseudonym was used to refer to the applicant/respondent.
#### Who are the annotators?
EOIR judges.
### Personal and Sensitive Information
There may be sensitive contexts involved. The courts already make a determination as to the filtering of sensitive data, but nonetheless sensitive topics may be discussed.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to learn contextual privacy rules to help filter private/sensitive data, but itself encodes biases of the courts from which the data came. We suggest that people look beyond this data for learning more contextual privacy rules.
### Discussion of Biases
Data may be biased due to its origin in U.S. immigration courts.
### Licensing Information
CC-BY-NC
### Citation Information
```
@misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson, Peter and Krass, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
}
``` |
false |
# Dataset Card for Haiku Data
|
false |
# POS Tagging Dataset
## Original Data Source
#### Conll2003
E. F. Tjong Kim Sang and F. De Meulder, Proceedings of the
Seventh Conference on Natural Language Learning at HLT-
NAACL 2003, 2003, pp. 142–147.
#### The Penn Treebank
M. P. Marcus, B. Santorini and M. A. Marcinkiewicz, Comput.
Linguist., 1993, 19, 313–330.
## Citation
BatteryDataExtractor: battery-aware text-mining software embedded with BERT models |
false | # Dataset Card for "ERRnews"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/10_3_23_Harm.pdf
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
ERRnews is an Estonian-language summarization dataset of ERR News broadcasts scraped from the ERR Archive (https://arhiiv.err.ee/err-audioarhiiv). The dataset consists of news story transcripts generated by an ASR pipeline, paired with the human-written summaries from the archive. For leveraging larger English models, the dataset includes machine-translated (https://neurotolge.ee/) transcript and summary pairs.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Estonian
## Dataset Structure
### Data Instances
```
{'name': 'Kütuseaktsiis Balti riikides on erinev.', 'summary': 'Eestis praeguse plaani järgi järgmise aasta maini kehtiv madalam diislikütuse aktsiis ei ajenda enam tankima Lätis, kuid bensiin on seal endiselt odavam. Peaminister Kaja Kallas ja kütusemüüjad on eri meelt selles, kui suurel määral mõjutab aktsiis lõpphinda tanklais.', 'transcript': 'Eesti-Läti piiri alal on kütusehinna erinevus eriti märgatav ja ka tuntav. Õigema pildi saamiseks tuleks võrrelda ühe keti keskmist hinda, kuna tanklati võib see erineda Circle K. [...] Olulisel määral mõjutab hinda kütuste sisseost, räägib kartvski. On selge, et maailmaturuhinna põhjal tehtud ost Tallinnas erineb kütusehinnast Riias või Vilniuses või Varssavis. Kolmas mõjur ja oluline mõjur on biolisandite kasutamise erinevad nõuded riikide vahel.', 'url': 'https://arhiiv.err.ee//vaata/uudised-kutuseaktsiis-balti-riikides-on-erinev', 'meta': '\n\n\nSarja pealkiri:\nuudised\n\n\nFonoteegi number:\nRMARH-182882\n\n\nFonogrammi tootja:\n2021 ERR\n\n\nEetris:\n16.09.2021\n\n\nSalvestuskoht:\nRaadiouudised\n\n\nKestus:\n00:02:34\n\n\nEsinejad:\nKond Ragnar, Vahtrik Raimo, Kallas Kaja, Karcevskis Ojars\n\n\nKategooria:\nUudised → uudised, muu\n\n\nPüsiviide:\n\nvajuta siia\n\n\n\n', 'audio': {'path': 'recordings/12049.ogv', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ..., 2.44576868e-06, 6.38223427e-06, 0.00000000e+00]), 'sampling_rate': 16000}, 'recording_id': 12049}
```
### Data Fields
```
name: News story headline
summary: Hand-written summary.
transcript: Automatically generated transcript from the audio file with an ASR system.
url: ERR archive URL.
meta: ERR archive metadata.
en_summary: Machine translated English summary.
en_transcript: Machine translated English transcript.
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
recording_id: Audio file id.
```
### Data Splits
|train|validation|test|
|:----|:---------|:---|
|10420|523|523|
### BibTeX entry and citation info
```bibtex
@article{henryabstractive,
title={Abstractive Summarization of Broadcast News Stories for {Estonian}},
author={Henry, H{\"a}rm and Tanel, Alum{\"a}e},
journal={Baltic J. Modern Computing},
volume={10},
number={3},
pages={511-524},
year={2022}
}
```
|
true |
# Dataset Card for pile-pii-scrubadub
## Dataset Description
- **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives**
- **Paper: Arxiv link to be added**
### Dataset Summary
This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the personally identifiable information (PII) in each sentence.
Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the percentage of words in it that are classified as PII by [Scrubadub](https://scrubadub.readthedocs.io/en/stable/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text.
## Dataset Structure
### Data Instances
1949977
### Data Fields
- texts (sequence): a list of the sentences in the document (segmented using [SpaCy](https://spacy.io/))
- meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated
- scores (sequence): a score for each sentence in the `texts` column indicating the percent of words that are detected as PII by [Scrubadub](https://scrubadub.readthedocs.io/en/stable/)
- avg_score (float64): the average of the scores listed in the `scores` column
- num_sents (int64): the number of sentences (and scores) in that document
### Data Splits
Training set only
## Dataset Creation
### Curation Rationale
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The PII is labeled so that generative language models can be trained to avoid generating PII.
### Source Data
#### Initial Data Collection and Normalization
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile).
#### Who are the source language producers?
Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset.
### Annotations
#### Annotation process
For each sentence, [Scrubadub](https://scrubadub.readthedocs.io/en/stable/) was used to detect:
- email addresses
- addresses and postal codes
- phone numbers
- credit card numbers
- US social security numbers
- vehicle plates numbers
- dates of birth
- URLs
- login credentials
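A rough sketch of how such per-sentence scores can be approximated with SpaCy and Scrubadub (the word-counting rule and the treatment of punctuation are assumptions; the exact procedure used for the published scores may differ):
```python
import scrubadub
import spacy

nlp = spacy.load("en_core_web_sm")

def pii_scores(document: str) -> list[float]:
    """For each sentence, return the fraction of words overlapping detected PII."""
    scores = []
    for sent in nlp(document).sents:
        filth = scrubadub.list_filth(sent.text)  # PII spans detected in this sentence
        words = [tok for tok in sent if not tok.is_space and not tok.is_punct]
        if not words:
            scores.append(0.0)
            continue
        pii_words = sum(
            any(f.beg <= tok.idx - sent.start_char < f.end for f in filth)
            for tok in words
        )
        scores.append(pii_words / len(words))
    return scores

doc = "Contact me at jane.doe@example.com. The weather is nice today."
print(pii_scores(doc))
```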
#### Who are the annotators?
[Scrubadub](https://scrubadub.readthedocs.io/en/stable/)
### Personal and Sensitive Information
This dataset contains all PII that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile), with all detected PII annotated.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contains examples of real PII (conveniently annotated in the text!). Please take care to avoid misusing it or putting anybody in danger by publicizing their information.
This dataset is intended for research purposes only. We cannot guarantee that all PII has been detected, and we cannot guarantee that models trained using it will avoid generating PII.
We do not recommend deploying models trained on this data.
### Discussion of Biases
This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027
### Other Known Limitations
The PII in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.
## Additional Information
### Dataset Curators
[The Pile](https://huggingface.co/datasets/the_pile)
### Licensing Information
From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
Paper information to be added
### Contributions
[The Pile](https://huggingface.co/datasets/the_pile) |
false | # Dataset Card for "stackoverflow_linux"
Dataset information:
- Source: Stack Overflow
- Category: Linux
- Number of samples: 300
- Train/Test split: 270/30
- Quality: Data come from the top 1k most upvoted questions
## Additional Information
### License
All Stack Overflow user contributions are licensed under CC-BY-SA 3.0 with attribution required.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# scientific_lay_summarisation - elife - normalized
This is the "_elife_" split. For more details, refer to the [PLOS split README](https://huggingface.co/datasets/pszemraj/scientific_lay_summarisation-plos-norm).
## Contents
load with datasets:
```python
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("pszemraj/scientific_lay_summarisation-elife-norm")
dataset
```
Output:
```python
DatasetDict({
train: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 4346
})
test: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 241
})
validation: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 241
})
})
```
## Lengths
Train set:

|
false | # Dataset Card for Food-101-Enriched (Enhanced by Renumics)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=food101-enriched)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [data.vision.ee.ethz.ch](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/)
- **Paper:** [Food-101 – Mining Discriminative Components with Random Forests](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf)
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=food101-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [Food101 Data Set](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/).
### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets
```
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/food101-enriched", split="train")
```
Start exploring with a simple view:
```python
from renumics import spotlight
df_show = dataset.to_pandas()
spotlight.show(df_show, port=8000, dtype={"image": spotlight.Image})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
### Food101 Dataset
This data set contains 101'000 images from 101 food categories.
For each class, 250 manually reviewed test images are provided as well as 750 training images.
On purpose, the training images were not cleaned, and thus still contain some amount of noise.
This comes mostly in the form of intense colors and sometimes wrong labels.
All images were rescaled to have a maximum side length of 512 pixels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a dish into one of 101 classes. The leaderboard is available [here](https://paperswithcode.com/sota/fine-grained-image-classification-on-food-101).
### Languages
English class labels.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
"image": "/huggingface/datasets/downloads/extracted/49750366cbaf225ce1b5a5c033fa85ceddeee2e82f1d6e0365e8287859b4c7c8/0/0.jpg",
"label": 6,
"label_str": "beignets",
"split": "train"
}
```
<details>
<summary>Class Label Mappings</summary>
```json
{
"apple_pie": 0,
"baby_back_ribs": 1,
"baklava": 2,
"beef_carpaccio": 3,
"beef_tartare": 4,
"beet_salad": 5,
"beignets": 6,
"bibimbap": 7,
"bread_pudding": 8,
"breakfast_burrito": 9,
"bruschetta": 10,
"caesar_salad": 11,
"cannoli": 12,
"caprese_salad": 13,
"carrot_cake": 14,
"ceviche": 15,
"cheesecake": 16,
"cheese_plate": 17,
"chicken_curry": 18,
"chicken_quesadilla": 19,
"chicken_wings": 20,
"chocolate_cake": 21,
"chocolate_mousse": 22,
"churros": 23,
"clam_chowder": 24,
"club_sandwich": 25,
"crab_cakes": 26,
"creme_brulee": 27,
"croque_madame": 28,
"cup_cakes": 29,
"deviled_eggs": 30,
"donuts": 31,
"dumplings": 32,
"edamame": 33,
"eggs_benedict": 34,
"escargots": 35,
"falafel": 36,
"filet_mignon": 37,
"fish_and_chips": 38,
"foie_gras": 39,
"french_fries": 40,
"french_onion_soup": 41,
"french_toast": 42,
"fried_calamari": 43,
"fried_rice": 44,
"frozen_yogurt": 45,
"garlic_bread": 46,
"gnocchi": 47,
"greek_salad": 48,
"grilled_cheese_sandwich": 49,
"grilled_salmon": 50,
"guacamole": 51,
"gyoza": 52,
"hamburger": 53,
"hot_and_sour_soup": 54,
"hot_dog": 55,
"huevos_rancheros": 56,
"hummus": 57,
"ice_cream": 58,
"lasagna": 59,
"lobster_bisque": 60,
"lobster_roll_sandwich": 61,
"macaroni_and_cheese": 62,
"macarons": 63,
"miso_soup": 64,
"mussels": 65,
"nachos": 66,
"omelette": 67,
"onion_rings": 68,
"oysters": 69,
"pad_thai": 70,
"paella": 71,
"pancakes": 72,
"panna_cotta": 73,
"peking_duck": 74,
"pho": 75,
"pizza": 76,
"pork_chop": 77,
"poutine": 78,
"prime_rib": 79,
"pulled_pork_sandwich": 80,
"ramen": 81,
"ravioli": 82,
"red_velvet_cake": 83,
"risotto": 84,
"samosa": 85,
"sashimi": 86,
"scallops": 87,
"seaweed_salad": 88,
"shrimp_and_grits": 89,
"spaghetti_bolognese": 90,
"spaghetti_carbonara": 91,
"spring_rolls": 92,
"steak": 93,
"strawberry_shortcake": 94,
"sushi": 95,
"tacos": 96,
"takoyaki": 97,
"tiramisu": 98,
"tuna_tartare": 99,
"waffles": 100
}
```
</details>
### Data Fields
| Feature | Data Type |
|---------------------------------|-----------------------------------------------|
| image | Image(decode=True, id=None) |
| split | Value(dtype='string', id=None) |
| label | ClassLabel(names=[...], id=None) |
| label_str | Value(dtype='string', id=None) |
### Data Splits
| Dataset Split | Number of Images in Split |
| ------------- |---------------------------|
| Train | 75750 |
| Test | 25250 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The Food-101 data set consists of images from Foodspotting [1] which are not property of the Federal Institute of Technology Zurich (ETHZ). Any use beyond scientific fair use must be negotiated with the respective picture owners according to the Foodspotting terms of use [2].
[1] [http://www.foodspotting.com/](http://www.foodspotting.com/)
[2] [http://www.foodspotting.com/terms/](http://www.foodspotting.com/terms/)
### Citation Information
If you use this dataset, please cite the following paper:
```
@inproceedings{bossard14,
title = {Food-101 -- Mining Discriminative Components with Random Forests},
author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc},
booktitle = {European Conference on Computer Vision},
year = {2014}
}
```
### Contributions
Lukas Bossard, Matthieu Guillaumin, Luc Van Gool, and Renumics GmbH. |
true |
# Dataset Card for "rumoureval_2019"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://competitions.codalab.org/competitions/19938](https://competitions.codalab.org/competitions/19938)
- **Repository:** [https://figshare.com/articles/dataset/RumourEval_2019_data/8845580](https://figshare.com/articles/dataset/RumourEval_2019_data/8845580)
- **Paper:** [https://aclanthology.org/S19-2147/](https://aclanthology.org/S19-2147/), [https://arxiv.org/abs/1809.06683](https://arxiv.org/abs/1809.06683)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
Stance prediction task in English. The goal is to predict whether a given reply to a claim either supports, denies, questions, or simply comments on the claim. Ran as a SemEval task in 2019.
### Supported Tasks and Leaderboards
* SemEval-2019 Task 7
### Languages
English of various origins, bcp47: `en`
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'id': '0',
'source_text': 'Appalled by the attack on Charlie Hebdo in Paris, 10 - probably journalists - now confirmed dead. An attack on free speech everywhere.',
'reply_text': '@m33ryg @tnewtondunn @mehdirhasan Of course it is free speech, that\'s the definition of "free speech" to openly make comments or draw a pic!',
'label': 3
}
```
### Data Fields
- `id`: a `string` feature.
- `source_text`: a `string` expressing a claim/topic.
- `reply_text`: a `string` to be classified for its stance to the source.
- `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "support",
1: "deny",
2: "query",
3: "comment"
```
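A minimal sketch for inspecting examples with human-readable stance labels; the repository ID is an assumption, as it is not stated on this card:
```python
# Label names follow the tagset above; the repo ID below is assumed.
from datasets import load_dataset

stance = {0: "support", 1: "deny", 2: "query", 3: "comment"}
ds = load_dataset("strombergnlp/rumoureval_2019", split="train")
example = ds[0]
print(stance[example["label"]], "->", example["reply_text"][:80])
```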
### Data Splits
| name |instances|
|---------|----:|
|train|7 005|
|dev|2 425|
|test|2 945|
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Twitter users
### Annotations
#### Annotation process
Detailed in [Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads](https://journals.plos.org/plosone/article/authors?id=10.1371/journal.pone.0150989)
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{gorrell-etal-2019-semeval,
title = "{S}em{E}val-2019 Task 7: {R}umour{E}val, Determining Rumour Veracity and Support for Rumours",
author = "Gorrell, Genevieve and
Kochkina, Elena and
Liakata, Maria and
Aker, Ahmet and
Zubiaga, Arkaitz and
Bontcheva, Kalina and
Derczynski, Leon",
booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S19-2147",
doi = "10.18653/v1/S19-2147",
pages = "845--854",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
false | # XLCost for text-to-code synthesis
## Dataset Description
This is a subset of [XLCoST benchmark](https://github.com/reddy-lab-code-research/XLCoST), for text-to-code generation at program level for **2** programming languages: `Python, C++`. This dataset is based on [codeparrot/xlcost-text-to-code](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) with the following improvements:
* NEWLINE, INDENT and DEDENT were replaced with the corresponding ASCII codes.
* the code text has been reformatted using autopep8 for Python and clang-format for cpp.
* new columns have been introduced to allow evaluation using pass@k metric.
* programs containing more than one function call in the driver code were removed
## Languages
The dataset contains text in English and its corresponding code translation. The text is a set of concatenated code comments that describe the program to be synthesized.
## Dataset Structure
To load the dataset you need to specify the language (Python or C++).
```python
from datasets import load_dataset

ds = load_dataset("giulio98/xlcost-single-prompt", "Python")
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['text', 'context', 'code', 'test', 'output', 'fn_call'],
#         num_rows: 8306
#     })
#     test: Dataset({
#         features: ['text', 'context', 'code', 'test', 'output', 'fn_call'],
#         num_rows: 812
#     })
#     validation: Dataset({
#         features: ['text', 'context', 'code', 'test', 'output', 'fn_call'],
#         num_rows: 427
#     })
# })
```
## Data Fields
* text: natural language description.
* context: import libraries/global variables.
* code: code at program level.
* test: test function call.
* output: expected output of the function call.
* fn_call: name of the function to call.
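As a sketch of how these fields support pass@k-style checking, assuming the test function call prints its result (a real harness would also sandbox the execution):
```python
# Run context + candidate code + test call in a subprocess and compare
# stdout against the expected output.
import subprocess
import sys

def passes(sample, candidate_code, timeout=10):
    program = "\n".join([sample["context"], candidate_code, sample["test"]])
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.stdout.strip() == sample["output"].strip()
```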
## Data Splits
Each subset has three splits: train, test and validation.
## Citation Information
```
@misc{zhu2022xlcost,
title = {XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence},
url = {https://arxiv.org/abs/2206.08474},
author = {Zhu, Ming and Jain, Aneesh and Suresh, Karthik and Ravindran, Roshan and Tipirneni, Sindhu and Reddy, Chandan K.},
year = {2022},
eprint={2206.08474},
archivePrefix={arXiv}
}
``` |
false |
# Fork of [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context)
#### Overview
This dataset builds from [WikiSQL](https://huggingface.co/datasets/wikisql) and [Spider](https://huggingface.co/datasets/spider).
There are 78,577 examples of natural language queries, SQL CREATE TABLE statements, and a SQL query answering the question using the CREATE statement as context. This dataset was built with text-to-SQL LLMs in mind, intending to prevent the hallucination of column and table names often seen when models are trained on text-to-SQL datasets. The CREATE TABLE statement can often be copied and pasted from a DBMS and provides table names, column names, and their data types. By providing just the CREATE TABLE statement as context, we can hopefully provide better grounding for models without having to provide actual rows of data, limiting token usage and exposure to private, sensitive, or proprietary data.
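A minimal sketch (the prompt template itself is an illustrative assumption, not part of the dataset) of how a sample might be formatted for a text-to-SQL model:
```python
# Turn one sample into a schema-grounded prompt; the template is illustrative.
def build_prompt(sample):
    return (
        f"-- Schema:\n{sample['context']}\n"
        f"-- Question: {sample['question']}\n"
        "-- SQL:\n"
    )
```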
#### Cleansing and Augmentation
Cleansing and data augmentation have been done on the combined WikiSQL and Spider data. I used [SQLGlot](https://github.com/tobymao/sqlglot) to parse the queries from Spider and WikiSQL into their constituent tables and columns, then inferred column data types based on the use of the `>` and `<` operators as well as `MIN()`, `MAX()`, `AVG()`, and `SUM()` on columns. While this isn't perfect, it increases the likelihood of inferring the correct data type for a column; columns otherwise default to the VARCHAR type. These tables and columns are then used to generate CREATE TABLE statements using the inferred types. SQLGlot is used again to ensure both the SQL queries and the CREATE TABLE statements parse without errors.
Some queries that do not have column names, e.g. `SELECT * FROM table`, have a default `Id` column added to the CREATE TABLE statement. Other queries that use the generic `table` as the FROM table have instead been changed to a variation of `table_name_1` (or some other number), which is also reflected in the CREATE TABLE statement.
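A minimal sketch of that parse check with stock SQLGlot (the exact filtering used to build the dataset may differ):
```python
# Flag samples whose CREATE TABLE context or answer query fails to parse.
import sqlglot

def parses_cleanly(sample):
    try:
        for statement in sample["context"].split(";"):
            if statement.strip():
                sqlglot.parse_one(statement)
        sqlglot.parse_one(sample["answer"])
        return True
    except sqlglot.errors.ParseError:
        return False
```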
#### TODO
- Further augment the data by converting queries and CREATE TABLE statements into different SQL dialects, this can be done with SQLGlot. Reference to the dialect might also be added to the question.
- Support other informative contexts beyond CREATE TABLE
Random sample:
```json
{
"question": "Please show the themes of competitions with host cities having populations larger than 1000.",
"context": "CREATE TABLE city (City_ID VARCHAR, Population INTEGER); CREATE TABLE farm_competition (Theme VARCHAR, Host_city_ID VARCHAR)",
"answer": "SELECT T2.Theme FROM city AS T1 JOIN farm_competition AS T2 ON T1.City_ID = T2.Host_city_ID WHERE T1.Population > 1000"
},
{
"question": "Please show the different statuses of cities and the average population of cities with each status.",
"context": "CREATE TABLE city (Status VARCHAR, Population INTEGER)",
"answer": "SELECT Status, AVG(Population) FROM city GROUP BY Status"
},
``` |
false | # Dataset Card for "xsum_eng2thai 🇬🇧🇹🇭"
- This dataset is based on [XSum](https://huggingface.co/datasets/xsum).
- The summaries were translated from English (as in the original XSum) to Thai using Meta's [NLLB-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B).
- The dataset is intended for Cross-Lingual Summarization (English Document -> Thai Summary).
### Data Fields
- `id`: BBC ID of the article.
- `document`: a string containing the body of the news article
- `summary`: a string containing a *translated* summary of the article.
### Data Instances
```
{
"id": "29750031",
"document": "news article in English",
"summary": "summary in Thai"
}
```
### Data Splits
train/validation/test = 204045/11332/11334 |
false | More than 15 hours of speech data from TTS and text-file recordings.
More than 9k utterances from various sources: novels, parliamentary debates, and professional language. |
false | # Dataset Card for Huatuo_knowledge_graph_qa
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/HuatuoGPT**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We built this QA dataset from a medical knowledge graph, with a total of 798,444 QA pairs. The questions are constructed by means of templates, and the answers are the contents of the corresponding entries in the knowledge graph.
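A minimal sketch of template-based question construction (the templates and relation names here are illustrative assumptions, not the ones used to build the dataset):
```python
# Build a QA pair from an (entity, relation, value) knowledge-graph triple.
TEMPLATES = {
    "symptom": "What are the symptoms of {entity}?",
    "treatment": "How is {entity} treated?",
}

def build_qa(entity, relation, value):
    return {"question": TEMPLATES[relation].format(entity=entity), "answer": value}

print(build_qa("influenza", "symptom", "fever, cough, and sore throat"))
```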
## Dataset Creation
### Source Data
https://cpubmed.openi.org.cn/graph/wiki
https://github.com/zhihao-chen/QASystemOnMedicalGraph
https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
false | # Dataset Card for x-self-instruct-seed-32 - Multilingual Prompts from Self-Instruct
### Dataset Summary
x-self-instruct-seed-32 consists of 32 prompts chosen out of the 252 prompts in the [self-instruct-seed](https://huggingface.co/datasets/HuggingFaceH4/self-instruct-seed) dataset from the [Self-Instruct](https://arxiv.org/pdf/2212.10560.pdf) paper. These 32 prompts were filtered out according to the following criteria:
- Should be natural in a chat setting
- Therefore, we filter out any prompts with "few-shot examples", as these are all instruction prompts that we consider unnatural in a chat setting
- Should be well-written and easily understood
- Our intention is to use the prompts as-is, without modification, in order to maintain parity with any other experiments that use this dataset
- However, we planned to translate the prompts into multiple languages, and poorly written or confusing prompts could lead to high variance in the resulting translations
- Avoid asking for code / domain specific languages
- Responses in code or domain specific languages defeat the purpose of multilingual evaluation
- Avoid potentially simple numerical responses
- These responses would likely be the same in every language and aren't good measures of multilingual ability
- Avoid requests for translation
- A good response will always be in the same language, so these prompts defeat the purpose of translating prompts into multiple languages
- Avoid prompts that may be difficult to translate / use English-specific language constructs
- Prompts that rely on English constructs such as puns, dad jokes, or witty proverbs may not translate well to other languages
- Some concepts or pop culture references may be culture-specific and difficult to translate to other languages, e.g. knowledge about American celebrities
- Avoid duplicate prompts / prompts that are too similar
The prompts were then manually translated by volunteers into 5 languages: Arabic, Simplified Chinese, French, Hindi and Spanish.
This dataset was originally curated for use in human evaluations of the multilingual abilities of [BLOOMChat](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1). Since not all prompts could be directly translated due to cultural and linguistic differences, volunteers were encouraged to make appropriate substitutions and modifications that maintain the intent of the original English prompt. We note any major departures from the original English prompts below.
### Languages
- Arabic (ar)
- English (en)
- Spanish (es)
- French (fr)
- Hindi (hi)
- Chinese (zh)
## Dataset Structure
### Data Fields
- `prompt`: manually translated prompt text. The English split is unmodified from the original self-instruct-seed dataset.
### Data Splits
The x-self-instruct-seed-32 dataset has 6 splits, one for each language. Below are the statistics for each split
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| ar | 32 |
| en | 32 |
| es | 32 |
| fr | 32 |
| hi | 32 |
| zh | 32 |
### Translation Notes
Below are notes from volunteer translators.
- Arabic
- No translation notes
- Spanish
- No translation notes
- French
- Prompt 20: Not sure how to directly translate “come up with” in this context. Using “provide” instead.
- Hindi
- Prompt 12: Changed Seattle city to a famous destination in India called Manali
- Chinese
- Prompt 20: Drop funny answer to make it more natural in Chinese
- Prompt 24: Rephrase a little bit to be “use humor to overcome embarrassment” to make it more natural in Chinese
### Curation Rationale
These prompts were originally curated in order to test the multilingual abilities of the BLOOMChat model. The model's responses to these translated prompts were rated on their quality in a chat setting. Therefore, emphasis was placed on making translations as natural and understandable as possible to native speakers, and we accepted feedback and modifications to the prompts from our volunteers.
### Dataset Curators
TBA
### Contributions
TBA
### Source Data
https://huggingface.co/datasets/HuggingFaceH4/self-instruct-seed |
true |
# Dataset Card for SynCSE-scratch-NLI
## Dataset Description
- **Repository:** [https://github.com/SJTU-LIT/SynCSE/](https://github.com/SJTU-LIT/SynCSE/)
- **Paper:** [Contrastive Learning of Sentence Embeddings from Scratch](https://arxiv.org/abs/2305.15077)
### Dataset Summary
SynCSE-scratch-NLI is a Natural Language Inference dataset generated by GPT-3.5-Turbo. You can use it to learn better sentence representations with contrastive learning. More details can be found in the [paper](https://arxiv.org/abs/2305.15077) and [code](https://github.com/SJTU-LIT/SynCSE/).
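As a sketch of the intended use, below is a SimCSE-style supervised contrastive (InfoNCE) loss over (anchor, positive, hard negative) embedding batches; the triple layout is an assumption, since this card does not list the data fields:
```python
# Supervised SimCSE-style InfoNCE: matching positives sit on the diagonal,
# all other positives and all hard negatives act as negatives.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, hard_negative, temperature=0.05):
    # Each input: [batch, dim] sentence embeddings.
    pos_sim = F.cosine_similarity(anchor.unsqueeze(1), positive.unsqueeze(0), dim=-1)
    neg_sim = F.cosine_similarity(anchor.unsqueeze(1), hard_negative.unsqueeze(0), dim=-1)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)
```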
### Supported Tasks and Leaderboards
Natural Language Inference
Contrastive Learning of Sentence Embeddings
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
### Data Splits
We only provide the training set. Specifically, you can use this dataset to train your model with contrastive learning and evaluate it on a variety of downstream sentence embedding tasks.
## Dataset Creation
The dataset was generated with GPT-3.5-Turbo.
### Curation Rationale
[More Information Needed]
# Citation
```
@article{zhang2023contrastive,
title={Contrastive Learning of Sentence Embeddings from Scratch},
author={Zhang, Junlei and Lan, Zhenzhong and He, Junxian},
journal={arXiv preprint arXiv:2305.15077},
year={2023}
}
``` |
false |
# ELRC-Medical-V2 : European parallel corpus for healthcare machine translation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://live.european-language-grid.eu/catalogue/project/2209
- **Repository:** https://github.com/qanastek/ELRC-Medical-V2/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
`ELRC-Medical-V2` is a parallel corpus for neural machine translation funded by the [European Commission](http://www.lr-coordination.eu/) and coordinated by the [German Research Center for Artificial Intelligence](https://www.dfki.de/web).
### Supported Tasks and Leaderboards
`translation`: The dataset can be used to train a model for translation.
### Languages
The corpus consists of pairs of source and target sentences for 23 different languages of the European Union (EU), with English (EN) as the source language in each case.
**List of languages :** `Bulgarian (bg)`,`Czech (cs)`,`Danish (da)`,`German (de)`,`Greek (el)`,`Spanish (es)`,`Estonian (et)`,`Finnish (fi)`,`French (fr)`,`Irish (ga)`,`Croatian (hr)`,`Hungarian (hu)`,`Italian (it)`,`Lithuanian (lt)`,`Latvian (lv)`,`Maltese (mt)`,`Dutch (nl)`,`Polish (pl)`,`Portuguese (pt)`,`Romanian (ro)`,`Slovak (sk)`,`Slovenian (sl)`,`Swedish (sv)`.
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
NAME = "qanastek/ELRC-Medical-V2"
dataset = load_dataset(NAME, use_auth_token=True)
print(dataset)
dataset_train = load_dataset(NAME, "en-es", split='train[:90%]')
dataset_test = load_dataset(NAME, "en-es", split='train[90%:]')
print(dataset_train)
print(dataset_train[0])
print(dataset_test)
```
## Dataset Structure
### Data Instances
```plain
id,lang,source_text,target_text
1,en-bg,"TOC \o ""1-3"" \h \z \u Introduction 3","TOC \o ""1-3"" \h \z \u Въведение 3"
2,en-bg,The international humanitarian law and its principles are often not respected.,Международното хуманитарно право и неговите принципи често не се зачитат.
3,en-bg,"At policy level, progress was made on several important initiatives.",На равнище политики напредък е постигнат по няколко важни инициативи.
```
### Data Fields
**id** : The document identifier of type `Integer`.
**lang** : The pair of source and target language of type `String`.
**source_text** : The source text of type `String`.
**target_text** : The target text of type `String`.
### Data Splits
| Lang | # Docs | Avg. # Source Tokens | Avg. # Target Tokens |
|--------|-----------|------------------------|------------------------|
| bg | 13 149 | 23 | 24 |
| cs | 13 160 | 23 | 21 |
| da | 13 242 | 23 | 22 |
| de | 13 291 | 23 | 22 |
| el | 13 091 | 23 | 26 |
| es | 13 195 | 23 | 28 |
| et | 13 016 | 23 | 17 |
| fi | 12 942 | 23 | 16 |
| fr | 13 149 | 23 | 28 |
| ga | 412 | 12 | 12 |
| hr | 12 836 | 23 | 21 |
| hu | 13 025 | 23 | 21 |
| it | 13 059 | 23 | 25 |
| lt | 12 580 | 23 | 18 |
| lv | 13 044 | 23 | 19 |
| mt | 3 093 | 16 | 14 |
| nl | 13 191 | 23 | 25 |
| pl | 12 761 | 23 | 22 |
| pt | 13 148 | 23 | 26 |
| ro | 13 163 | 23 | 25 |
| sk | 12 926 | 23 | 20 |
| sl | 13 208 | 23 | 21 |
| sv | 13 099 | 23 | 21 |
|||||
| Total | 277 780 | 22.21 | 21.47 |
## Dataset Creation
### Curation Rationale
For details, check the corresponding [pages](https://elrc-share.eu/repository/search/?q=mfsp%3A87ef9e5e8ac411ea913100155d026706e19a1a9f908b463c944490c36ba2f454&page=3).
### Source Data
#### Initial Data Collection and Normalization
The acquisition of bilingual data (from multilingual websites), normalization, cleaning, deduplication and identification of parallel documents have been done by [ILSP-FC tool](http://nlp.ilsp.gr/redmine/projects/ilsp-fc/wiki/Introduction). [Maligna aligner](https://github.com/loomchild/maligna) was used for alignment of segments. Merging/filtering of segment pairs has also been applied.
#### Who are the source language producers?
All the data in this corpus has been uploaded by [Vassilis Papavassiliou](mailto:vpapa@ilsp.gr) on [ELRC-Share](https://elrc-share.eu/repository/browse/bilingual-corpus-from-the-publications-office-of-the-eu-on-the-medical-domain-v2-en-fr/6b31b32e8ac411ea913100155d0267061547d9b3ec284584af19a2953baa8937/).
### Personal and Sensitive Information
The corpora is free of personal or sensitive information.
## Considerations for Using the Data
### Other Known Limitations
The nature of the task introduces variability in the quality of the target translations.
## Additional Information
### Dataset Curators
__ELRC-Medical-V2__: Labrak Yanis, Dufour Richard
__Bilingual corpus from the Publications Office of the EU on the medical domain v.2 (EN-XX) Corpus__: [Vassilis Papavassiliou](mailto:vpapa@ilsp.gr) and [others](https://live.european-language-grid.eu/catalogue/project/2209).
### Licensing Information
<a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf"><img alt="Attribution 4.0 International (CC BY 4.0) License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="https://elrc-share.eu/static/metashare/licences/CC-BY-4.0.pdf">Attribution 4.0 International (CC BY 4.0) License</a>.
### Citation Information
Please cite the following paper when using this dataset.
```latex
@inproceedings{losch-etal-2018-european,
    title = "European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management",
    author = "Lösch, Andrea and
      Mapelli, Valérie and
      Piperidis, Stelios and
      Vasiljevs, Andrejs and
      Smal, Lilli and
      Declerck, Thierry and
      Schnur, Eileen and
      Choukri, Khalid and
      van Genabith, Josef",
    booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
    month = may,
    year = "2018",
    address = "Miyazaki, Japan",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L18-1213",
}
```
|
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
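A minimal sketch for loading one of them with Hugging Face Datasets; the `BeIR/<dataset>` repository naming is an assumption based on the Hub mirrors of this benchmark:
```python
# Load the SciFact corpus, queries, and relevance judgements.
from datasets import load_dataset

corpus = load_dataset("BeIR/scifact", "corpus", split="corpus")
queries = load_dataset("BeIR/scifact", "queries", split="queries")
qrels = load_dataset("BeIR/scifact-qrels", split="train")
```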
### Supported Tasks and Leaderboards
The benchmark supports zero-shot evaluation of retrieval models, with nDCG@10 as the primary reported metric.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order, with the first row as a header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
	- `query-id`: a `string` feature representing the query id
	- `corpus-id`: a `string` feature, denoting the document id.
	- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
false |
# Dataset Card for Country211
The [Country 211 Dataset](https://github.com/openai/CLIP/blob/main/data/country211.md) from OpenAI.
This dataset was built by filtering the images from the YFCC100m dataset that have GPS coordinates corresponding to an ISO-3166 country code. The dataset is balanced by sampling 150 train images, 50 validation images, and 100 test images for each country. |
true |
XFUND dataset
see more detail at [this](https://github.com/doc-analysis/XFUND)
### Citation Information
```latex
@inproceedings{xu-etal-2022-xfund,
title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
author = "Xu, Yiheng and
Lv, Tengchao and
Cui, Lei and
Wang, Guoxin and
Lu, Yijuan and
Florencio, Dinei and
Zhang, Cha and
Wei, Furu",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.253",
doi = "10.18653/v1/2022.findings-acl.253",
pages = "3214--3224",
abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
}
``` |
false |
# CoNLL-NERC-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://www.cs.upc.edu/~nlp/tools/nerc/nerc.html
- **Point of Contact:** [Xavier Carreras](carreras@lsi.upc.es)
### Dataset Summary
CoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Named Entity Recognition and Classification
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
<pre>
El DA O
Abogado NC B-PER
General AQ I-PER
del SP I-PER
Estado NC I-PER
, Fc O
Daryl VMI B-PER
Williams NC I-PER
, Fc O
subrayó VMI O
hoy RG O
la DA O
necesidad NC O
de SP O
tomar VMN O
medidas NC O
para SP O
proteger VMN O
al SP O
sistema NC O
judicial AQ O
australiano AQ O
frente RG O
a SP O
una DI O
página NC O
de SP O
internet NC O
que PR O
imposibilita VMI O
el DA O
cumplimiento NC O
de SP O
los DA O
principios NC O
básicos AQ O
de SP O
la DA O
Ley NC B-MISC
. Fp O
</pre>
### Data Fields
Every file has three columns: the word form or punctuation symbol, its part-of-speech tag, and the corresponding BIO tag. Sentences are separated by an empty line.
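A minimal sketch for reading this format into per-sentence (tokens, BIO tags) pairs, using the file names listed under Data Splits below:
```python
# Read the whitespace-separated columns into per-sentence token/tag lists,
# splitting sentences on blank lines; uses the first and last columns.
def read_conll(path):
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if not parts:
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
            else:
                tokens.append(parts[0])
                tags.append(parts[-1])
    if tokens:
        sentences.append((tokens, tags))
    return sentences

sentences = read_conll("esp.train")
```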
### Data Splits
- esp.train: 273037 lines
- esp.testa: 54837 lines (used as dev)
- esp.testb: 53049 lines (used as test)
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
#### Initial Data Collection and Normalization
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the source language producers?
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Annotations
#### Annotation process
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the annotators?
The annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC project (IST-1999-12392).
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
The following paper must be cited when using this corpus:
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
### Contributions
[N/A]
|
false |
# Dataset Card for ATCOSIM corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCOSIM homepage](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)
- **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
- **Paper:** [The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech](https://aclanthology.org/L08-1507/)
- **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
### Dataset Summary
The ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer ([description here](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)).
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
### Languages and other details
The text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German or Swiss French native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.
## Dataset Structure
### Data Fields
- `id (string)`: a recording identifier for each example.
- `audio (audio)`: audio data for the given ID.
- `text (string)`: transcript of the file, already normalized. See these repositories for more details: [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0).
- `segment_end_time (float32)`: segment end time.
- `duration (float32)`: duration of the recording, computed as `segment_end_time - segment_start_time`.
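A minimal sketch for inspecting an utterance; the repository ID is an assumption, and the 16 kHz resampling reflects what Wav2Vec 2.0 fine-tuning typically expects:
```python
# Load the corpus and resample the audio on the fly; the repo ID is assumed.
from datasets import load_dataset, Audio

ds = load_dataset("Jzuluaga/atcosim_corpus", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]
print(sample["id"], sample["duration"], sample["text"])
```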
## Additional Information
### Licensing Information
The licensing status of this dataset hinges on the license granted by the creators of the original [ATCOSIM corpus](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html).
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
Authors of the dataset:
```
@inproceedings{hofbauer-etal-2008-atcosim,
title = "The {ATCOSIM} Corpus of Non-Prompted Clean Air Traffic Control Speech",
author = "Hofbauer, Konrad and
Petrik, Stefan and
Hering, Horst",
booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
month = may,
year = "2008",
address = "Marrakech, Morocco",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2008/pdf/545_paper.pdf",
}
```
|
false | # Dataset Card for "plant_species"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true |
# Dataset Card for "quora"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.kaggle.com/c/quora-question-pairs](https://www.kaggle.com/c/quora-question-pairs)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 55.48 MB
- **Size of the generated dataset:** 55.46 MB
- **Total amount of disk used:** 110.94 MB
### Dataset Summary
The Quora dataset is composed of question pairs, and the task is to determine if the questions are paraphrases of each other (have the same meaning).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 55.48 MB
- **Size of the generated dataset:** 55.46 MB
- **Total amount of disk used:** 110.94 MB
### Data Fields
The data fields are the same among all splits.
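A minimal sketch for inspecting the concrete schema; the field names in the comment reflect the canonical Quora question-pairs release and are stated as an assumption, since this card does not list them:
```python
# Print the dataset schema to see the concrete fields.
from datasets import load_dataset

ds = load_dataset("quora", split="train")
print(ds.features)
# Expected (assumption): a nested 'questions' field with 'id' and 'text',
# plus a boolean 'is_duplicate' target.
```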
### Data Splits
| name |train |
|-------|-----:|
|default|404290|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Quora Terms of Service](https://www.quora.com/about/tos), no commercial use.
### Citation Information
Unknown.
|
false |
<div align="center">
<img width="640" alt="trpakov/chest-xray-classification" src="https://huggingface.co/datasets/trpakov/chest-xray-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['PNEUMONIA', 'NORMAL']
```
### Number of Images
```json
{"test": 582, "valid": 1165, "train": 12230}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("trpakov/chest-xray-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/3](https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/3?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on December 8, 2021 at 12:45 AM GMT
It includes 13977 images.
Pneumonia cases are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* Random shear of between -3° and +3° horizontally and -2° and +2° vertically
* Random brightness adjustment of between -5 and +5 percent
* Random exposure adjustment of between -5 and +5 percent
|